Cached plans and statement generalization

Started by Konstantin Knizhnik over 8 years ago, 107 messages
#1Konstantin Knizhnik
k.knizhnik@postgrespro.ru

Hi hackers,

There were a lot of discussions about query plan caching in the hackers
mailing list, but I failed to find a clear answer to my question and
the current consensus on it in the Postgres community. As far as
I understand, the current state is the following:
1. We have per-connection prepared statements.
2. Queries executed inside plpgsql code are implicitly prepared.

It is not always possible or convenient to use prepared statements,
for example when pgbouncer is used to perform connection pooling.
Another use case (which is actually the problem I am trying to solve
now) is partitioning.
Efficient execution of a query against a partitioned table requires a
hardcoded value for the partitioning key.
Only in this case can the optimizer construct an efficient query
plan which accesses only the affected tables (partitions).

My small benchmark for a distributed partitioned table based on pg_pathman
+ postgres_fdw shows a threefold performance degradation when using
prepared statements.
But without prepared statements a substantial amount of time is spent on
query compilation and planning. I was able to speed up the benchmark more
than twofold by sending prepared queries directly to the remote nodes.

So what I am thinking about now is implicit query caching. If the same
query with different literal values is repeated many times, then we can try
to generalize this query and replace it with a prepared query with
parameters. I am not considering a shared query cache for now: it seems to
be much harder to implement. But local caching of generalized queries seems
not so difficult to implement and requires relatively few changes in the
Postgres code. And it can be useful not only for sharding, but for many
other cases where prepared statements cannot be used.

I wonder whether such an option has already been considered and, if it was
rejected for some reason, can you point me at the reasons?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#2Alexander Korotkov
a.korotkov@postgrespro.ru
In reply to: Konstantin Knizhnik (#1)
Re: Cached plans and statement generalization

Hi, Konstantin!

On Mon, Apr 24, 2017 at 11:46 AM, Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:

There were a lot of discussions about query plan caching in the hackers
mailing list, but I failed to find a clear answer to my question and the
current consensus on it in the Postgres community. As far as I understand,
the current state is the following:
1. We have per-connection prepared statements.
2. Queries executed inside plpgsql code are implicitly prepared.

It is not always possible or convenient to use prepared statements,
for example when pgbouncer is used to perform connection pooling.
Another use case (which is actually the problem I am trying to solve now)
is partitioning.
Efficient execution of a query against a partitioned table requires a
hardcoded value for the partitioning key.
Only in this case can the optimizer construct an efficient query plan
which accesses only the affected tables (partitions).

My small benchmark for a distributed partitioned table based on pg_pathman +
postgres_fdw shows a threefold performance degradation when using prepared
statements.
But without prepared statements a substantial amount of time is spent on
query compilation and planning. I was able to speed up the benchmark more
than twofold by sending prepared queries directly to the remote nodes.

I don't think it's correct to ask PostgreSQL hackers about a problem which
arises with pg_pathman, since pg_pathman is an extension supported by
Postgres Pro.
Since we have declarative partitioning committed to 10, I think that the
community should address this issue in the context of declarative
partitioning.
However, it's unlikely we can spot this issue with declarative partitioning
because it still uses the very inefficient constraint exclusion mechanism.
Thus, the issues you are writing about will become visible with declarative
partitioning only once constraint exclusion is replaced with something
more efficient.

Long story short, could you reproduce this issue without pg_pathman?

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#3Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Alexander Korotkov (#2)
Re: Cached plans and statement generalization

On 24.04.2017 13:24, Alexander Korotkov wrote:

Hi, Konstantin!

On Mon, Apr 24, 2017 at 11:46 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

There were a lot of discussions about query plan caching in the
hackers mailing list, but I failed to find a clear answer to my
question and the current consensus on it in the Postgres
community. As far as I understand, the current state is the following:
1. We have per-connection prepared statements.
2. Queries executed inside plpgsql code are implicitly prepared.

It is not always possible or convenient to use prepared statements,
for example when pgbouncer is used to perform connection pooling.
Another use case (which is actually the problem I am trying to
solve now) is partitioning.
Efficient execution of a query against a partitioned table requires
a hardcoded value for the partitioning key.
Only in this case can the optimizer construct an efficient
query plan which accesses only the affected tables (partitions).

My small benchmark for a distributed partitioned table based on
pg_pathman + postgres_fdw shows a threefold performance degradation
when using prepared statements.
But without prepared statements a substantial amount of time is
spent on query compilation and planning. I was able to speed up
the benchmark more than twofold by sending prepared queries
directly to the remote nodes.

I don't think it's correct to ask PostgreSQL hackers about a problem
which arises with pg_pathman, since pg_pathman is an extension
supported by Postgres Pro.
Since we have declarative partitioning committed to 10, I think that
the community should address this issue in the context of declarative
partitioning.
However, it's unlikely we can spot this issue with declarative
partitioning because it still uses the very inefficient constraint
exclusion mechanism. Thus, the issues you are writing about will
become visible with declarative partitioning only once constraint
exclusion is replaced with something more efficient.

Long story short, could you reproduce this issue without pg_pathman?

Sorry, I mentioned pg_pathman just as an example.
The same problems take place with partitioning based on the standard
Postgres inheritance mechanism (when I manually create derived tables
and specify constraints for them).
I haven't yet tested the declarative partitioning committed to 10, but I
expect that it will also suffer from this problem (because it is based on
inheritance).
But as I wrote, I think that the problem with plan caching is wider and
is not limited just to partitioning.

------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#4Andres Freund
andres@anarazel.de
In reply to: Konstantin Knizhnik (#1)
Re: Cached plans and statement generalization

Hi,

On 2017-04-24 11:46:02 +0300, Konstantin Knizhnik wrote:

So what I am thinking now is implicit query caching. If the same query with
different literal values is repeated many times, then we can try to
generalize this query and replace it with prepared query with
parameters.

That's not actually all that easy:
- You pretty much have to do parse analysis to be able to do an accurate match.
How much overhead is parse analysis vs. planning in your cases?
- The invalidation infrastructure for this, if not tied to fully
parse-analyzed statements, is going to be hell.
- Migrating to parameters can actually cause significant slowdowns, not
nice if that happens implicitly.

Greetings,

Andres Freund

#5Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Andres Freund (#4)
1 attachment(s)
Re: Cached plans and statement generalization

On 24.04.2017 21:43, Andres Freund wrote:

Hi,

On 2017-04-24 11:46:02 +0300, Konstantin Knizhnik wrote:

So what I am thinking now is implicit query caching. If the same query with
different literal values is repeated many times, then we can try to
generalize this query and replace it with prepared query with
parameters.

That's not actually all that easy:
- You pretty much have to do parse analysis to be able to do an accurate match.
How much overhead is parse analysis vs. planning in your cases?
- The invalidation infrastructure for this, if not tied to fully
parse-analyzed statements, is going to be hell.
- Migrating to parameters can actually cause significant slowdowns, not
nice if that happens implicitly.

Well, first of all I want to share the results I have already obtained:
pgbench with default parameters, scale 10 and one connection:

protocol               TPS
simple                 3492
extended               2927
prepared               6865
simple + autoprepare   6844

So autoprepare is as efficient as explicit prepare and can increase
performance almost twofold.

My current implementation replaces only string literals in the query
with parameters, i.e. select * from T where x='123'; -> select *
from T where x=$1;
This greatly simplifies matching of parameters: it is just necessary to
locate the '\'' character and then correctly handle pairs of quotes.
Handling of integer and real literals is a really challenging task.
One source of problems is negation: it is not so easy to decide correctly
whether a minus should be treated as part of the literal or as an
operator:
(-1), (1-1), (1-1)-1
Another problem is caused by integer literals in contexts where
parameters cannot be used, for example "order by 1".
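
To make this concrete, here is a minimal standalone sketch of the
substitution logic (simplified relative to the attached patch: no parameter
extraction, no careful buffer sizing, just the quote handling; the function
name is made up):

#include <stdio.h>
#include <ctype.h>

/* Replace every '...' literal (with '' as an escaped quote) by $1, $2, ...
 * and collapse runs of whitespace.  dst is assumed to be large enough
 * for the generalized text. */
static void generalize(const char *src, char *dst)
{
    int n_params = 0;
    unsigned char ch;

    while ((ch = *src++) != '\0')
    {
        if (isspace(ch))
        {
            /* replace a sequence of whitespace with a single space */
            while (*src && isspace((unsigned char) *src))
                src++;
            *dst++ = ' ';
        }
        else if (ch == '\'')
        {
            /* skip the literal body, honouring doubled quotes */
            while (*src != '\0')
            {
                if (*src == '\'' && src[1] != '\'')
                {
                    src++;      /* closing quote */
                    break;
                }
                if (*src == '\'')
                    src++;      /* first half of an escaped quote */
                src++;
            }
            dst += sprintf(dst, "$%d", ++n_params);
        }
        else
            *dst++ = ch;
    }
    *dst = '\0';
}

int main(void)
{
    char buf[256];

    generalize("select  * from T where x='123' and y='it''s'", buf);
    puts(buf);  /* select * from T where x=$1 and y=$2 */
    return 0;
}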

Fully correct substitution can be done by first parsing the query, then
transforming the parse tree, replacing literal nodes with parameter
nodes, and finally deparsing the tree into a generalized query. postgres_fdw
already contains such deparse code. It could be moved to the Postgres core
and reused for autoprepare (and maybe elsewhere).
But in this case the overhead will be much higher.
I still think that query parsing time is significantly smaller than the
time needed for building and optimizing the query execution plan.
But it should be measured, if the community is interested in such an approach.

There is an obvious question: how did I manage to get these pgbench results
if currently only substitution of string literals is supported and the
queries constructed by pgbench don't contain string literals? I just made a
small patch to pgbench's replaceVariable method, wrapping each value's
representation in quotes. It has almost no impact on performance (3482 TPS
vs. 3492 TPS), but allows autoprepare to deal with pgbench queries.

I have attached my patch to this mail. It is just a first version of the
patch (based on the REL9_6_STABLE branch), intended to illustrate the
proposed approach.
I will be glad to receive any comments, and if this optimization is
considered useful, I will continue work on this patch.

--

Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare.patch (text/x-patch)
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f6be98b..6291d66 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -96,8 +96,6 @@ int			max_stack_depth = 100;
 /* wait N seconds to allow attach from a debugger */
 int			PostAuthDelay = 0;
 
-
-
 /* ----------------
  *		private variables
  * ----------------
@@ -188,6 +186,7 @@ static bool IsTransactionStmtList(List *parseTrees);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char *query_string);
 
 
 /* ----------------------------------------------------------------
@@ -916,6 +915,14 @@ exec_simple_query(const char *query_string)
 	drop_unnamed_stmt();
 
 	/*
+	 * Try to find cached plan
+	 */
+	if (autoprepare_threshold != 0 && exec_cached_query(query_string))
+	{
+		return;
+	}
+
+	/*
 	 * Switch to appropriate context for constructing parsetrees.
 	 */
 	oldcontext = MemoryContextSwitchTo(MessageContext);
@@ -4500,3 +4509,566 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 				  port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+
+
+typedef struct { 
+	char const* query;
+	int64 exec_count;
+	CachedPlanSource* plan;	
+	int n_params;
+	int16 format;
+} plan_cache_entry;
+
+
+/*
+ * Replace string literals with parameters. We do not consider integer or real literals to avoid problems with 
+ * negative numbers, user-defined operators, ... For example, it is not easy to distinguish the cases (-1), (1-1), (1-1)-1
+ */
+static void generalize_statement(const char *query_string, char** gen_query, char** query_params, int* n_params)
+{
+	size_t query_len = strlen(query_string);
+	char const* src = query_string;
+	char* dst;
+	char* params;
+	unsigned char ch;
+
+	*n_params = 0;
+
+	*gen_query = (char*)palloc(query_len*2); /* assume that we have less than 1000 parameters, the worst case is replacing '' with $999 */
+	*query_params = (char*)palloc(query_len + 1);
+	dst = *gen_query;
+	params = *query_params;
+
+	while ((ch = *src++) != '\0') { 
+		if (isspace(ch)) { 
+			/* Replace sequence of whitespaces with just one space */
+			while (*src && isspace(*(unsigned char*)src)) { 
+				src += 1;
+			}
+			*dst++ = ' ';
+		} else if (ch == '\'') { 
+			while (true) { 
+				ch = *src++;				
+				if (ch == '\'') { 
+					if (*src != '\'') { 
+						break;
+					} else {
+						/* escaped quote */
+						*params++ = '\'';
+						src += 1;
+					}
+				} else { 
+					*params++ = ch;
+				}
+			}
+			*params++ = '\0';
+			dst += sprintf(dst, "$%d", ++*n_params);
+		} else { 
+			*dst++ = ch;
+		}
+	}			
+	Assert(dst <= *gen_query + query_len*2);
+	Assert(params <= *query_params + query_len + 1);
+	*dst = '\0';
+}
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return string_hash(((plan_cache_entry*)key)->query, 0);
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	return strcmp(((plan_cache_entry*)key1)->query, ((plan_cache_entry*)key2)->query);
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{ 
+	((plan_cache_entry*)dest)->query = pstrdup(((plan_cache_entry*)src)->query);
+    return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Try to generalize query, find cached plan for it and execute
+ */
+static bool exec_cached_query(const char *query_string)
+{
+	CommandDest       dest = whereToSendOutput;
+	DestReceiver     *receiver;
+	char             *gen_query;
+	char             *query_params;
+	int               n_params;
+	plan_cache_entry *entry;
+	bool              found;
+	MemoryContext     old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo     params;
+	int               paramno;
+	CachedPlan       *cplan;
+	Portal		      portal;
+	bool		      was_logged = false;
+	bool		      is_xact_command;
+	bool		      execute_is_fetch;
+	char		      completion_tag[COMPLETION_TAG_BUFSIZE];
+	bool	 	      save_log_statement_stats = log_statement_stats;
+	ParamListInfo     portal_params;
+	const char       *source_text;
+	char		      msec_str[32];
+	bool		      snapshot_set = false;
+		
+	static HTAB*      plan_cache;
+	static MemoryContext plan_cache_context;
+
+	/* 
+	 * Extract literals from query
+	 */
+	generalize_statement(query_string, &gen_query, &query_params, &n_params);
+
+	if (plan_cache_context == NULL) { 
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/* 
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache == NULL) 
+	{ 
+		static HASHCTL info;
+		info.keysize = sizeof(char*);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;	
+		plan_cache = hash_create("plan_cache", PLAN_CACHE_SIZE, &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+	}
+	
+	/*
+	 * Lookup generalized query 
+	 */
+	entry = (plan_cache_entry*)hash_search(plan_cache, &gen_query, HASH_ENTER, &found);
+	if (!found) { 
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->n_params = n_params;
+	} else {
+		Assert(entry->n_params == n_params);
+	}
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->exec_count++ < autoprepare_threshold) { 
+		return false;
+	}
+	if (entry->plan == NULL) { 
+		List       *parsetree_list;
+		Node       *raw_parse_tree;
+		const char *command_tag;
+		Query	   *query;
+		List	   *querytree_list;
+		Oid        *param_types = NULL;
+		int         num_params = 0;
+
+		old_context = MemoryContextSwitchTo(MessageContext);
+		
+		parsetree_list = pg_parse_query(gen_query);
+		
+		/*
+		 * Only single user statement are allowed in a prepared statement.
+		 */
+		if (list_length(parsetree_list) != 1) {
+			MemoryContextSwitchTo(old_context);
+			return false;
+		}
+
+		raw_parse_tree = (Node *) linitial(parsetree_list);
+	   
+		/*
+		 * Get the command name for possible use in status display.
+		 */
+		command_tag = CreateCommandTag(raw_parse_tree);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ROLLBACK.  It is important that this test occur before we
+		 * try to do parse analysis, rewrite, or planning, since all those
+		 * phases try to do database accesses, which may fail in abort state.
+		 * (It might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, gen_query, command_tag);
+
+		/*
+		 * Set up a snapshot if parse analysis will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		/*
+		 * Analyze and rewrite the query.  Note that the originally specified
+		 * parameter set is not required to be complete, so we have to use
+		 * parse_analyze_varparams().
+		 */
+		if (log_parser_stats) {
+			ResetUsage();
+		}
+
+		query = parse_analyze_varparams(raw_parse_tree,
+										gen_query,
+										&param_types,
+										&num_params);
+		Assert(num_params == entry->n_params);
+
+		/*
+		 * Check all parameter types got determined.
+		 */
+		for (paramno = 0; paramno < n_params; paramno++)
+		{
+			Oid			ptype = param_types[paramno];
+
+			if (ptype == InvalidOid || ptype == UNKNOWNOID) {
+				/* Type of parameter can not be determined */
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+		}
+		
+		if (log_parser_stats) {
+			ShowUsage("PARSE ANALYSIS STATISTICS");
+		}
+
+		querytree_list = pg_rewrite_query(query);
+
+		/* Done with the snapshot used for parsing */
+		if (snapshot_set) {
+			PopActiveSnapshot();
+		}
+
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+
+		SaveCachedPlan(psrc);
+		entry->plan = psrc;
+		MemoryContextSwitchTo(old_context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+	}
+	psrc = entry->plan;
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.  We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.  We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+		
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	} else { 
+		snapshot_set = false;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		params = (ParamListInfo) palloc(offsetof(ParamListInfoData, params) +
+										n_params * sizeof(ParamExternData));
+		params->paramFetch = NULL;
+		params->paramFetchArg = NULL;
+		params->parserSetup = NULL;
+		params->parserSetupArg = NULL;
+		params->numParams = n_params;
+		params->paramMask = NULL;
+		
+		for (paramno = 0; paramno < n_params; paramno++)
+		{
+			Oid			ptype = psrc->param_types[paramno];
+			Oid			typinput;
+			Oid			typioparam;
+			
+			getTypeInputInfo(ptype, &typinput, &typioparam);
+			
+			params->params[paramno].value = OidInputFunctionCall(typinput, query_params, typioparam, -1);
+			params->params[paramno].isnull = false;
+
+			/*
+			 * We mark the params as CONST.  This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+
+			query_params += strlen(query_params) + 1;
+		}
+	} else { 
+		params = NULL;
+	}
+		
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.  Any cruft from (re)planning
+	 * will be generated in MessageContext.  The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false);
+	
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  gen_query,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set) {
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/* Does the portal contain a transaction command? */
+	is_xact_command = IsTransactionStmtList(portal->stmts);
+
+	/*
+	 * We must copy the sourceText into MessageContext in
+	 * case the portal is destroyed during finish_xact_command. Can avoid the
+	 * copy if it's not an xact command, though.
+	 */
+	if (is_xact_command)
+	{
+		source_text = pstrdup(portal->sourceText);
+		/*
+		 * An xact command shouldn't have any parameters, which is a good
+		 * thing because they wouldn't be around after finish_xact_command.
+		 */
+		portal_params = NULL;
+	}
+	else
+	{
+		source_text = portal->sourceText;
+		portal_params = portal->portalParams;
+	}
+
+	/*
+	 * Report query to various monitoring facilities.
+	 */
+	debug_query_string = source_text;
+
+	pgstat_report_activity(STATE_RUNNING, source_text);
+
+	set_ps_display(portal->commandTag, false);
+
+	if (save_log_statement_stats) {
+		ResetUsage();
+	}
+
+	BeginCommand(portal->commandTag, dest);
+
+
+	/*
+	 * Create dest receiver in MessageContext (we don't want it in transaction
+	 * context, because that may get deleted if portal contains VACUUM).
+	 */
+	receiver = CreateDestReceiver(dest);
+	SetRemoteDestReceiverParams(receiver, portal);
+
+	/*
+	 * If we re-issue an Execute protocol request against an existing portal,
+	 * then we are only fetching more rows rather than completely re-executing
+	 * the query from the start. atStart is never reset for a v3 portal, so we
+	 * are safe to use this check.
+	 */
+	execute_is_fetch = !portal->atStart;
+
+	/* Log immediately if dictated by log_statement */
+	if (check_log_statement(portal->stmts))
+	{
+		ereport(LOG,
+				(errmsg("%s %s%s%s: %s",
+						execute_is_fetch ?
+						_("execute fetch from") :
+						_("execute"),
+						"<unnamed>",
+						"",
+						"",
+						source_text),
+				 errhidestmt(true),
+				 errdetail_params(portal_params)));
+		was_logged = true;
+	}
+
+	/* Check for cancel signal before we start execution */
+	CHECK_FOR_INTERRUPTS();
+
+	/*
+	 * Run the portal to completion, and then drop it (and the receiver).
+	 */
+	(void) PortalRun(portal,
+					 FETCH_ALL,
+					 true,
+					 receiver,
+					 receiver,
+					 completion_tag);
+
+	(*receiver->rDestroy) (receiver);
+
+	PortalDrop(portal, false);
+
+	/*
+	 * Tell client that we're done with this query.  Note we emit exactly
+	 * one EndCommand report for each raw parsetree, thus one for each SQL
+	 * command the client sent, regardless of rewriting. (But a command
+	 * aborted by error will not send an EndCommand report at all.)
+	 */
+	EndCommand(completion_tag, dest);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	/*
+	 * Emit duration logging if appropriate.
+	 */
+	switch (check_log_duration(msec_str, was_logged))
+	{
+		case 1:
+			ereport(LOG,
+					(errmsg("duration: %s ms", msec_str),
+					 errhidestmt(true)));
+			break;
+		case 2:
+			ereport(LOG,
+					(errmsg("duration: %s ms  %s %s%s%s: %s",
+							msec_str,
+							execute_is_fetch ?
+							_("execute fetch from") :
+							_("execute"),
+							"<unnamed>",
+							"",
+							"",
+							source_text),
+					 errhidestmt(true),
+					 errdetail_params(portal_params)));
+			break;
+	}
+
+	if (save_log_statement_stats) {
+		ShowUsage("EXECUTE MESSAGE STATISTICS");
+	}
+	debug_query_string = NULL;
+
+	return true;
+}
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 4f1891f..d3acf5b 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -450,6 +450,9 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1949,6 +1952,19 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing query."),
+		 NULL,
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 0bf9f21..ef23569 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -252,6 +252,8 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
#6Serge Rielau
serge@rielau.com
In reply to: Konstantin Knizhnik (#5)
Re: Cached plans and statement generalization

On Apr 25, 2017, at 8:11 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
Another problem is caused by integer literals in contexts where parameters cannot be used, for example "order by 1".

You will also need to deal with modifiers in types such as VARCHAR(10). Not sure if there are specific functions which can only deal with literals (?) as well.

Doug Doole did this work in DB2 LUW and he may be able to point to more places to watch out for semantically.

Generally, in my experience, this feature is very valuable when dealing with (poorly designed) web apps that just glue together strings.
Protecting it under a GUC would allow the work to be done only when it’s deemed likely to help.
Another rule I find useful is to abort any effort to substitute literals if any bind variable is found in the query.
That can be used as a cue that the author of the SQL left the remaining literals in on purpose.
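
Roughly like this (just a sketch: it assumes bind variables appear as $n in
the statement text, and for brevity ignores $n occurring inside string
literals or dollar quoting):

#include <stdio.h>
#include <ctype.h>
#include <stdbool.h>

/* Report whether the statement text already contains a bind variable
 * ($1, $2, ...); if so, literal substitution would be skipped entirely. */
static bool has_bind_variable(const char *sql)
{
    for (; *sql; sql++)
        if (sql[0] == '$' && isdigit((unsigned char) sql[1]))
            return true;
    return false;
}

int main(void)
{
    /* substitution skipped: the author already used a parameter */
    printf("%d\n", has_bind_variable("select * from t where a = $1 and b = '5'"));
    /* substitution allowed */
    printf("%d\n", has_bind_variable("select * from t where b = '5'"));
    return 0;
}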

A follow-up feature would be to formalize different flavors of peeking,
i.e. can you produce a generic plan, but still recruit the initial set of bind values/substituted literals to do costing?

Cheers
Serge Rielau
Salesforce.com

PS: FWIW, I like this feature.

#7Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Serge Rielau (#6)
Re: Cached plans and statement generalization

On 25.04.2017 19:12, Serge Rielau wrote:

On Apr 25, 2017, at 8:11 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:
Another problem is caused by integer literals in contexts where
parameters cannot be used, for example "order by 1".

You will also need to deal with modifiers in types such as
VARCHAR(10). Not sure if there are specific functions which can only
deal with literals (?) as well.

Sorry, I do not completely understand how the presence of type modifiers
can affect string literals used in a query.
Can you provide me with some example?

Doug Doole did this work in DB2 LUW and he may be able to point to
more places to watch out for semantically.

Generally, in my experience, this feature is very valuable when
dealing with (poorly designed) web apps that just glue together strings.

I do not think that this optimization will be useful only for poorly
designed applications.
I already pointed at two use cases where prepared statements cannot be
used:
1. pgbouncer without session-level pooling.
2. partitioning

Protecting it under a GUC would allow the work to be done only when it's
deemed likely to help.
Another rule I find useful is to abort any effort to substitute
literals if any bind variable is found in the query.
That can be used as a cue that the author of the SQL left the
remaining literals in on purpose.

A follow-up feature would be to formalize different flavors of peeking,
i.e. can you produce a generic plan, but still recruit the initial set
of bind values/substituted literals to do costing?

Here the situation is the same as for explicitly prepared statements,
isn't it?
Sometimes it is preferable to use a specialized plan rather than a generic
plan.
I am not sure whether Postgres is currently able to do that.

Cheers
Serge Rielau
Salesforce.com

PS: FWIW, I like this feature.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#8David Fetter
david@fetter.org
In reply to: Konstantin Knizhnik (#5)
Re: Cached plans and statement generalization

On Tue, Apr 25, 2017 at 06:11:09PM +0300, Konstantin Knizhnik wrote:

On 24.04.2017 21:43, Andres Freund wrote:

Hi,

On 2017-04-24 11:46:02 +0300, Konstantin Knizhnik wrote:

So what I am thinking now is implicit query caching. If the same query with
different literal values is repeated many times, then we can try to
generalize this query and replace it with prepared query with
parameters.

That's not actually all that easy:
- You pretty much have to do parse analysis to be able to do an accurate match.
How much overhead is parse analysis vs. planning in your cases?
- The invalidation infrastructure for this, if not tied to fully
parse-analyzed statements, is going to be hell.
- Migrating to parameters can actually cause significant slowdowns, not
nice if that happens implicitly.

Well, first of all I want to share the results I have already obtained:
pgbench with default parameters, scale 10 and one connection:

protocol               TPS
simple                 3492
extended               2927
prepared               6865
simple + autoprepare   6844

If this is string mashing on the unparsed query, as it appears to be,
it's going to be a perennial source of security issues.

Best,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

#9Serge Rielau
serge@rielau.com
In reply to: Konstantin Knizhnik (#7)
Re: Cached plans and statement generalization

On Tue, Apr 25, 2017 at 9:45 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

On 25.04.2017 19:12, Serge Rielau wrote:

On Apr 25, 2017, at 8:11 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:
Another problem is caused by integer literals in contexts where
parameters cannot be used, for example "order by 1".

You will also need to deal with modifiers in types such as VARCHAR(10).
Not sure if there are specific functions which can only deal with
literals (?) as well.

Sorry, I do not completely understand how the presence of type modifiers
can affect string literals used in a query.
Can you provide me with some example?

SELECT ‘hello’::CHAR(10) || ‘World’, 5 + 6;

You can substitute ‘hello’, ‘World’, 5, and 6. But not 10.

Also some OLAP syntax like “rows preceding”

It pretty much boils down to whether you can do some shallow parsing
rather than expending the effort to build the parse tree.

Cheers
Serge

#10Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: David Fetter (#8)
Re: Cached plans and statement generalization

On 04/25/2017 07:54 PM, David Fetter wrote:

On Tue, Apr 25, 2017 at 06:11:09PM +0300, Konstantin Knizhnik wrote:

On 24.04.2017 21:43, Andres Freund wrote:

Hi,

On 2017-04-24 11:46:02 +0300, Konstantin Knizhnik wrote:

So what I am thinking now is implicit query caching. If the same query with
different literal values is repeated many times, then we can try to
generalize this query and replace it with prepared query with
parameters.

That's not actually all that easy:
- You pretty much have to do parse analysis to be able to do an accurate match.
How much overhead is parse analysis vs. planning in your cases?
- The invalidation infrastructure for this, if not tied to fully
parse-analyzed statements, is going to be hell.
- Migrating to parameters can actually cause significant slowdowns, not
nice if that happens implicitly.

Well, first of all I want to share the results I have already obtained:
pgbench with default parameters, scale 10 and one connection:

protocol               TPS
simple                 3492
extended               2927
prepared               6865
simple + autoprepare   6844

If this is string mashing on the unparsed query, as it appears to be,
it's going to be a perennial source of security issues.

Sorry, maybe I missed something, but I cannot understand how security can
be violated by extracting string literals from a query. I am just copying
bytes from one buffer to another. I do not try to somehow interpret these
parameters. What I am doing is very similar to standard prepared statements.
And moreover, the query is parsed! Only a query which was already parsed and
executed (but with different values of parameters) can be autoprepared.

Best,
David.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#11Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Serge Rielau (#9)
Re: Cached plans and statement generalization

On 04/25/2017 08:09 PM, Serge Rielau wrote:

On Tue, Apr 25, 2017 at 9:45 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

On 25.04.2017 19:12, Serge Rielau wrote:

On Apr 25, 2017, at 8:11 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:
Another problem is caused by integer literals in contexts where parameters cannot be used, for example "order by 1".

You will also need to deal with modifiers in types such as VARCHAR(10). Not sure if there are specific functions which can only deal with literals (?) as well.

Sorry, I do not completely understand how the presence of type modifiers can affect string literals used in a query.
Can you provide me with some example?

SELECT ‘hello’::CHAR(10) || ‘World’, 5 + 6;

You can substitute ‘hello’, ‘World’, 5, and 6. But not 10.

I am substituting only string literals. So the query above will be transformed to

SELECT $1::CHAR(10) || $2, 5 + 6;

What's wrong with it?

Also some OLAP syntax like “rows preceding”

It pretty much boils down to whether you can do some shallow parsing rather than expending the effort to build the parse tree.

Cheers
Serge

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#12Serge Rielau
serge@rielau.com
In reply to: Konstantin Knizhnik (#11)
Re: Cached plans and statement generalization

On Apr 25, 2017, at 1:37 PM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

SELECT ‘hello’::CHAR(10) || ‘World’, 5 + 6;

You can substitute ‘hello’, ‘World’, 5, and 6. But not 10.

I am substituting only string literals. So the query above will be transformed to

SELECT $1::CHAR(10) || $2, 5 + 6;

What's wrong with it?

Oh, well that leaves a lot of opportunities on the table, doesn’t it?

Cheers
Serge

#13Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Serge Rielau (#12)
Re: Cached plans and statement generalization

On 04/25/2017 11:40 PM, Serge Rielau wrote:

On Apr 25, 2017, at 1:37 PM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

SELECT ‘hello’::CHAR(10) || ‘World’, 5 + 6;

You can substitute ‘hello’, ‘World’, 5, and 6. But not 10.

I am substituting only string literals. So the query above will be transformed to

SELECT $1::CHAR(10) || $2, 5 + 6;

What's wrong with it?

Oh, well that leaves a lot of opportunities on the table, doesn’t it?

Well, actually my primary intention was not to make badly designed programs (not using prepared statements) work faster.
I wanted to address cases where it is not possible to use prepared statements.
If we want to substitute as many literals as possible with parameters, then the parse+deparse approach seems to be the only reasonable one.
I will try to implement it as well, just to estimate the parsing overhead.

Cheers
Serge

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#14Doug Doole
ddoole@salesforce.com
In reply to: Konstantin Knizhnik (#13)
Re: Cached plans and statement generalization

When I did this in DB2, I didn't use the parser - it was too expensive. I
just tokenized the statement and used some simple rules to bypass the
invalid cases. For example, if I saw the tokens "ORDER" and "BY" then I'd
disallow replacement replacement until I hit the end of the current
subquery or statement.

There are a few limitations to this approach. For example, DB2 allowed you
to cast using function notation like VARCHAR(foo, 10). This meant I would
never replace the second parameter of any VARCHAR function. Now it's
possible that when the statement was fully compiled we'd find that
VARCHAR(foo,10) actually resolved to BOB.VARCHAR() instead of the built-in
cast function. Our thinking was that these cases were rare enough that we
wouldn't worry about them. (Of course, PostgreSQL's ::VARCHAR(10) syntax
avoids this problem completely.)

Because SQL is so structured, the implementation ended up being quite
simple (a few hundred lines of code) with no significant maintenance issues.
(Other developers had no problem adding in new cases where constants had to
be preserved.)

The simple tokenizer was also fairly extensible. I'd prototyped using the
same code to also normalize statements (uppercase all keywords, collapse
whitespace to a single blank, etc.) but that feature was never added to the
product.
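
As a rough illustration of the kind of rule I mean, here is a sketch
(made-up names, integer literals only, and a deliberately crude notion of
where the ORDER BY clause ends):

#include <stdio.h>
#include <ctype.h>
#include <stdbool.h>
#include <strings.h>

/* Shallow token scan: replace integer literals with $n, except while an
 * ORDER BY clause is open (crudely: from "ORDER" "BY" until a ')' or ';'
 * ends the current subquery or statement).  The minus-sign ambiguity
 * discussed earlier in the thread is ignored here. */
static void replace_numbers(const char *src, char *dst)
{
    int  n_params = 0;
    bool in_order_by = false;
    bool prev_was_order = false;

    while (*src)
    {
        if (isalpha((unsigned char) *src))
        {
            char word[64];
            int  len = 0;

            while (isalnum((unsigned char) *src) && len < 63)
                word[len++] = *src++;
            word[len] = '\0';

            if (prev_was_order && strcasecmp(word, "BY") == 0)
                in_order_by = true;
            prev_was_order = (strcasecmp(word, "ORDER") == 0);

            dst += sprintf(dst, "%s", word);
        }
        else if (isdigit((unsigned char) *src) && !in_order_by)
        {
            while (isdigit((unsigned char) *src))
                src++;
            dst += sprintf(dst, "$%d", ++n_params);
        }
        else
        {
            if (*src == ')' || *src == ';')
                in_order_by = false;
            *dst++ = *src++;
        }
    }
    *dst = '\0';
}

int main(void)
{
    char buf[256];

    replace_numbers("SELECT a + 10 FROM t WHERE b = 42 ORDER BY 1", buf);
    puts(buf);  /* SELECT a + $1 FROM t WHERE b = $2 ORDER BY 1 */
    return 0;
}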

- Doug

On Tue, Apr 25, 2017 at 1:47 PM Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:

On 04/25/2017 11:40 PM, Serge Rielau wrote:

On Apr 25, 2017, at 1:37 PM, Konstantin Knizhnik <
k.knizhnik@postgrespro.ru> wrote:

SELECT ‘hello’::CHAR(10) || ‘World’, 5 + 6;

You can substitute ‘hello’, ‘World’, 5, and 6. But not 10.

I am substituting only string literals. So the query above will be
transformed to

SELECT $1::CHAR(10) || $2, 5 + 6;

What's wrong with it?

Oh, well that leaves a lot of opportunities on the table, doesn’t it?

Well, actually my primary intention was not to make badly designed
programs (not using prepared statements) work faster.
I wanted to address cases where it is not possible to use prepared
statements.
If we want to substitute as many literals as possible with parameters,
then the parse+deparse approach seems to be the only reasonable one.
I will try to implement it as well, just to estimate the parsing overhead.

Cheers
Serge

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#15Andres Freund
andres@anarazel.de
In reply to: Doug Doole (#14)
Re: Cached plans and statement generalization

On 2017-04-25 21:11:08 +0000, Doug Doole wrote:

When I did this in DB2, I didn't use the parser - it was too expensive. I
just tokenized the statement and used some simple rules to bypass the
invalid cases. For example, if I saw the tokens "ORDER" and "BY" then I'd
disallow replacement until I hit the end of the current
subquery or statement.

How did you manage plan invalidation and such?

- Andres

#16Doug Doole
ddoole@salesforce.com
In reply to: Andres Freund (#15)
Re: Cached plans and statement generalization

Plan invalidation was no different than for any SQL statement. DB2 keeps a
list of the objects the statement depends on. If any of the objects changes
in an incompatible way the plan is invalidated and kicked out of the cache.

I suspect what is more interesting is plan lookup. DB2 has something called
the "compilation environment". This is a collection of everything that
impacts how a statement is compiled (SQL path, optimization level, etc.).
Plan lookup is done using both the statement text and the compilation
environment. So, for example, if my path is DOUG, MYTEAM, SYSIBM and your
path is ANDRES, MYTEAM, SYSIBM we will have different compilation
environments. If we both issue "SELECT * FROM T" we'll end up with
different cache entries even if T in both of our statements resolves to
MYTEAM.T. If I execute "SELECT * FROM T", change my SQL path and then
execute "SELECT * FROM T" again, I have a new compilation environment so
the second invocation of the statement will create a new entry in the
cache. The first entry is not kicked out - it will still be there for
re-use if I change my SQL path back to my original value (modulo LRU for
cache memory management of course).

With literal replacement, the cache entry is on the modified statement
text. Given the modified statement text and the compilation environment,
you're guaranteed to get the right plan entry.
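
Conceptually the cache key was something like the following (reduced to two
environment fields for illustration; all the names here are made up, not
actual DB2 or PostgreSQL code):

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* A plan-cache key that combines the statement text with the pieces of
 * the compilation environment that affect how it is compiled: the same
 * text under a different environment hashes to a different entry. */
typedef struct
{
    const char *stmt_text;   /* (generalized) statement text */
    const char *sql_path;    /* e.g. "DOUG,MYTEAM,SYSIBM" */
    int         opt_level;   /* optimization level */
} compile_env_key;

static unsigned hash_key(const compile_env_key *k)
{
    unsigned    h = 5381;
    const char *s;

    for (s = k->stmt_text; *s; s++)
        h = h * 33 + (unsigned char) *s;
    for (s = k->sql_path; *s; s++)
        h = h * 33 + (unsigned char) *s;
    return h * 33 + (unsigned) k->opt_level;
}

static bool keys_equal(const compile_env_key *a, const compile_env_key *b)
{
    return strcmp(a->stmt_text, b->stmt_text) == 0 &&
           strcmp(a->sql_path, b->sql_path) == 0 &&
           a->opt_level == b->opt_level;
}

int main(void)
{
    compile_env_key k1 = { "SELECT * FROM T", "DOUG,MYTEAM,SYSIBM", 5 };
    compile_env_key k2 = { "SELECT * FROM T", "ANDRES,MYTEAM,SYSIBM", 5 };

    /* same text, different environments: two distinct cache entries */
    printf("%u %u equal=%d\n", hash_key(&k1), hash_key(&k2),
           keys_equal(&k1, &k2));
    return 0;
}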

On Tue, Apr 25, 2017 at 2:47 PM Andres Freund <andres@anarazel.de> wrote:

On 2017-04-25 21:11:08 +0000, Doug Doole wrote:

When I did this in DB2, I didn't use the parser - it was too expensive. I
just tokenized the statement and used some simple rules to bypass the
invalid cases. For example, if I saw the tokens "ORDER" and "BY" then I'd
disallow replacement until I hit the end of the current
subquery or statement.

How did you manage plan invalidation and such?

- Andres

#17David Fetter
david@fetter.org
In reply to: Konstantin Knizhnik (#10)
Re: Cached plans and statement generalization

On Tue, Apr 25, 2017 at 11:35:21PM +0300, Konstantin Knizhnik wrote:

On 04/25/2017 07:54 PM, David Fetter wrote:

On Tue, Apr 25, 2017 at 06:11:09PM +0300, Konstantin Knizhnik wrote:

On 24.04.2017 21:43, Andres Freund wrote:

Hi,

On 2017-04-24 11:46:02 +0300, Konstantin Knizhnik wrote:

So what I am thinking now is implicit query caching. If the same query with
different literal values is repeated many times, then we can try to
generalize this query and replace it with prepared query with
parameters.

That's not actually all that easy:
- You pretty much have to do parse analysis to be able to do an accurate match.
How much overhead is parse analysis vs. planning in your cases?
- The invalidation infrastructure for this, if not tied to fully
parse-analyzed statements, is going to be hell.
- Migrating to parameters can actually cause significant slowdowns, not
nice if that happens implicitly.

Well, first of all I want to share the results I have already obtained:
pgbench with default parameters, scale 10 and one connection:

protocol               TPS
simple                 3492
extended               2927
prepared               6865
simple + autoprepare   6844

If this is string mashing on the unparsed query, as it appears to be,
it's going to be a perennial source of security issues.

Sorry, maybe I missed something, but I cannot understand how
security can be violated by extracting string literals from a query.
I am just copying bytes from one buffer to another. I do not try to
somehow interpret these parameters. What I am doing is very similar
to standard prepared statements. And moreover, the query is parsed!
Only a query which was already parsed and executed (but with different
values of parameters) can be autoprepared.

I don't have an exploit yet. What concerns me is attackers' access to
what is in essence the ability to poke at RULEs when they only have
privileges to read.

Best,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

#18Andres Freund
andres@anarazel.de
In reply to: Doug Doole (#16)
Re: Cached plans and statement generalization

Hi,

(FWIW, on this list we don't do top-quotes)

On 2017-04-25 22:21:22 +0000, Doug Doole wrote:

Plan invalidation was no different than for any SQL statement. DB2 keeps a
list of the objects the statement depends on. If any of the objects changes
in an incompatible way the plan is invalidated and kicked out of the cache.

I suspect what is more interesting is plan lookup. DB2 has something called
the "compilation environment". This is a collection of everything that
impacts how a statement is compiled (SQL path, optimization level, etc.).
Plan lookup is done using both the statement text and the compilation
environment. So, for example, if my path is DOUG, MYTEAM, SYSIBM and your
path is ANDRES, MYTEAM, SYSIBM we will have different compilation
environments. If we both issue "SELECT * FROM T" we'll end up with
different cache entries even if T in both of our statements resolves to
MYTEAM.T. If I execute "SELECT * FROM T", change my SQL path and then
execute "SELECT * FROM T" again, I have a new compilation environment so
the second invocation of the statement will create a new entry in the
cache. The first entry is not kicked out - it will still be there for
re-use if I change my SQL path back to my original value (modulo LRU for
cache memory management of course).

It's not always that simple, at least in postgres, unless you disregard
search_path. Consider e.g. cases like

CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.foobar(somecol int);
SET search_path = 'b,a';
SELECT * FROM foobar;
CREATE TABLE b.foobar(anothercol int);
SELECT * FROM foobar; -- may not be cached plan from before!

it sounds - my memory of DB2 is very faint, and I never used it much -
like similar issues could arise in DB2 too?

Greetings,

Andres Freund

#19David G. Johnston
david.g.johnston@gmail.com
In reply to: David Fetter (#17)
Re: Cached plans and statement generalization

On Tue, Apr 25, 2017 at 3:24 PM, David Fetter <david@fetter.org> wrote:

I don't have an exploit yet. What concerns me is attackers' access to
what is in essence the ability to poke at RULEs when they only have
privileges to read.

If they want to see how it works they can read the source code. In terms
of runtime data it would limited to whatever the session itself created.
In most cases the presence of the cache would be invisible. I suppose it
might appear if one were to explain a query, reset the session, explain
another query and then re-explain the original. If the chosen plan in the
second pass differed because of the presence of the leading query it would
be noticeable but not revealing. Albeit I'm a far cry from a security
expert...

David J.

#20Doug Doole
ddoole@salesforce.com
In reply to: Andres Freund (#18)
Re: Cached plans and statement generalization

(FWIW, on this list we don't do top-quotes)

I know. Forgot and just did "reply all". My bad.

It's not always that simple, at least in postgres, unless you disregard
search_path. Consider e.g. cases like

CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.foobar(somecol int);
SET search_path = 'b,a';
SELECT * FROM foobar;
CREATE TABLE b.foobar(anothercol int);
SELECT * FROM foobar; -- may not be cached plan from before!

it sounds - my memory of DB2 is very faint, and I never used it much -
like similar issues could arise in DB2 too?

DB2 does handle this case. Unfortunately I don't know the details of how it
worked though.

A naive option would be to invalidate anything that depends on table or
view *.FOOBAR. You could probably make it a bit smarter by also requiring
that schema A appear in the path.

- Doug

#21Finnerty, Jim
jfinnert@amazon.com
In reply to: Andres Freund (#18)
Re: Cached plans and statement generalization

On 4/25/17, 6:34 PM, "pgsql-hackers-owner@postgresql.org on behalf of Andres Freund" <pgsql-hackers-owner@postgresql.org on behalf of andres@anarazel.de> wrote:

It's not always that simple, at least in postgres, unless you disregard
search_path. Consider e.g. cases like

CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.foobar(somecol int);
SET search_path = 'b,a';
SELECT * FROM foobar;
CREATE TABLE b.foobar(anothercol int);
SELECT * FROM foobar; -- may not be cached plan from before!

it sounds - my memory of DB2 is very faint, and I never used it much -
like similar issues could arise in DB2 too?

Greetings,

Andres Freund

You may need to store the reloid of each relation in the cached query to ensure that the schema bindings are the same, and also invalidate dependent cache entries if a referenced relation is dropped.
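
Something along these lines, sketched with plain C stand-ins (the Oid
typedef, the entry layout and the callback are illustrative only, not the
actual plancache code):

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int Oid;

/* Each cached entry remembers the reloids its plan depends on; a
 * relation-change callback invalidates only the entries that reference
 * the changed relation, instead of wiping the whole cache. */
typedef struct cached_query
{
    struct cached_query *next;
    Oid     relids[8];      /* relations the plan depends on */
    int     n_relids;
    bool    valid;
} cached_query;

static cached_query *cache_head;

static void on_relation_change(Oid relid)
{
    cached_query *e;

    for (e = cache_head; e != NULL; e = e->next)
    {
        int i;

        for (i = 0; i < e->n_relids; i++)
        {
            if (e->relids[i] == relid)
            {
                e->valid = false;   /* force replan on next use */
                break;
            }
        }
    }
}

int main(void)
{
    cached_query q = { NULL, { 16384, 16402 }, 2, true };

    cache_head = &q;
    on_relation_change(16402);       /* e.g. the relation was altered */
    printf("valid = %d\n", q.valid); /* 0 */
    return 0;
}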

Regards,

Jim Finnerty

#22Serge Rielau
serge@rielau.com
In reply to: Doug Doole (#20)
Re: Cached plans and statement generalization

On Tue, Apr 25, 2017 at 3:48 PM, Doug Doole <ddoole@salesforce.com> wrote:

It's not always that simple, at least in postgres, unless you disregard
search_path. Consider e.g. cases like

CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.foobar(somecol int);
SET search_path = 'b,a';
SELECT * FROM foobar;
CREATE TABLE b.foobar(anothercol int);
SELECT * FROM foobar; -- may not be cached plan from before!

it sounds - my memory of DB2 is very faint, and I never used it much -
like similar issues could arise in DB2 too?

DB2 does handle this case. Unfortunately I don't know the details of how it
worked though.
A naive option would be to invalidate anything that depends on table or
view *.FOOBAR. You could probably make it a bit smarter by also requiring
that schema A appear in the path.

While this specific scenario does not arise in DB2, since it uses CURRENT
SCHEMA only for tables (much to my dislike), your example holds for
functions and types, which are resolved by path. For encapsulated SQL (in
views, functions) conservative semantics are enforced by including the
timestamp. For dynamic SQL the problem you describe does exist, though,
and I think it is handled in the way Doug describes.

However, as noted by Doug, the topic of plan invalidation is really
orthogonal to normalizing the queries. All it does is provide more
opportunities to run into any pre-existing bugs.

Cheers
Serge

PS: I’m just starting to look at the plan invalidation code in PG because
we are dealing with potentially 10s of thousands of cached SQL statements,
so these complete wipe-outs or walks of every plan in the cache don’t scale.

#23Tsunakawa, Takayuki
tsunakawa.takay@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#5)
Re: Cached plans and statement generalization

From: pgsql-hackers-owner@postgresql.org
[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Konstantin
Knizhnik
Well, first of all I want to share the results I have already obtained:
pgbench with default parameters, scale 10 and one connection:

So autoprepare is as efficient as explicit prepare and can increase
performance almost twofold.

This sounds great.

BTW, when are the autoprepared statements destroyed? Are you considering some upper limit on the number of prepared statements?

Regards
Takayuki Tsunakawa

#24Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Andres Freund (#15)
Re: Cached plans and statement generalization

On 26.04.2017 00:47, Andres Freund wrote:

On 2017-04-25 21:11:08 +0000, Doug Doole wrote:

When I did this in DB2, I didn't use the parser - it was too expensive. I
just tokenized the statement and used some simple rules to bypass the
invalid cases. For example, if I saw the tokens "ORDER" and "BY" then I'd
disallow replacement replacement until I hit the end of the current
subquery or statement.

How did you manage plan invalidation and such?

The same mechanism as for prepared statements is used.
Cached plans are linked into the list by the SaveCachedPlan function and are
invalidated by PlanCacheRelCallback.
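
Condensed, the reuse check looks like this (a sketch; the full code is in
exec_cached_query() in the patch posted later in this thread):

	/*
	 * Plans registered with SaveCachedPlan() get their is_valid flag
	 * cleared by PlanCacheRelCallback() on invalidation, so a cache
	 * lookup only has to re-check the flag and re-prepare if stale.
	 */
	if (entry->plan != NULL && !entry->plan->is_valid)
	{
		DropCachedPlan(entry->plan);	/* discard the stale plan */
		entry->plan = NULL;				/* forces re-preparation */
	}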


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#25Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Andres Freund (#18)
Re: Cached plans and statement generalization

On 26.04.2017 01:34, Andres Freund wrote:

Hi,

(FWIW, on this list we don't do top-quotes)

On 2017-04-25 22:21:22 +0000, Doug Doole wrote:

Plan invalidation was no different than for any SQL statement. DB2 keeps a
list of the objects the statement depends on. If any of the objects changes
in an incompatible way the plan is invalidated and kicked out of the cache.

I suspect what is more interesting is plan lookup. DB2 has something called
the "compilation environment". This is a collection of everything that
impacts how a statement is compiled (SQL path, optimization level, etc.).
Plan lookup is done using both the statement text and the compilation
environment. So, for example, if my path is DOUG, MYTEAM, SYSIBM and your
path is ANDRES, MYTEAM, SYSIBM we will have different compilation
environments. If we both issue "SELECT * FROM T" we'll end up with
different cache entries even if T in both of our statements resolves to
MYTEAM.T. If I execute "SELECT * FROM T", change my SQL path and then
execute "SELECT * FROM T" again, I have a new compilation environment so
the second invocation of the statement will create a new entry in the
cache. The first entry is not kicked out - it will still be there for
re-use if I change my SQL path back to my original value (modulo LRU for
cache memory management of course).

It's not always that simple, at least in postgres, unless you disregard
search_path. Consider e.g. cases like

CREATE SCHEMA a;
CREATE SCHEMA b;
CREATE TABLE a.foobar(somecol int);
SET search_path = 'b,a';
SELECT * FROM foobar;
CREATE TABLE b.foobar(anothercol int);
SELECT * FROM foobar; -- may not be cached plan from before!

it sounds - my memory of DB2 is very faint, and I never used it much -
like similar issues could arise in DB2 too?

There is the same problem with explicitly prepared statements, isn't there?
Certainly when prepared statements are used, it is the responsibility of
the programmer to avoid such collisions.
With autoprepare this is hidden from the programmer.
But there is a GUC variable controlling the autoprepare feature, and by
default it is switched off.
So if a programmer or DBA enables it, they should take the effects of
such a decision into account.

By the way, isn't it a bug in PostgreSQL that altering the search path
does not invalidate cached plans?
As I already mentioned, the same problem can be reproduced with
explicitly prepared statements.


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#26Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tsunakawa, Takayuki (#23)
Re: Cached plans and statement generalization

On 26.04.2017 04:00, Tsunakawa, Takayuki wrote:

From: pgsql-hackers-owner@postgresql.org

[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Konstantin
Knizhnik
Well, first of all I want to share results I already get: pgbench with default
parameters, scale 10 and one connection:

So autoprepare is as efficient as explicit prepare and can increase
performance almost two times.

This sounds great.

BTW, when are the autoprepared statements destroyed?

Right now they are destroyed only on receiving an invalidation
message (when the catalog is changed).
Prepared statements are local to a backend and are located in the backend's
memory.
It is unlikely that there will be so many different queries that they
cause memory overflow.
But in theory such a situation is certainly possible.

Are you considering some upper limit on the number of prepared statements?

In this case we need some kind of LRU for maintaining the cache of
autoprepared statements.
I think that it is a good idea to have such a limited cache - it can avoid
the memory overflow problem.
I will try to implement it.
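
Roughly, with the dlist primitives from lib/ilist.h it can look like this
(a condensed sketch; plan_cache_entry is the cache entry type from the
patches later in this thread, with a dlist_node field named "lru"):

	/* Relink the entry at the list head on every lookup, so the node just
	 * before the head of the circular list is always the least recently
	 * used entry and is the eviction victim. */
	static void
	lru_touch(dlist_head *lru_list, plan_cache_entry *entry)
	{
		dlist_delete(&entry->lru);
		dlist_insert_after(&lru_list->head, &entry->lru);
	}

	static plan_cache_entry *
	lru_victim(dlist_head *lru_list)
	{
		return dlist_container(plan_cache_entry, lru, lru_list->head.prev);
	}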


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#27Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#26)
1 attachment(s)
Re: Cached plans and statement generalization

On 26.04.2017 10:49, Konstantin Knizhnik wrote:

On 26.04.2017 04:00, Tsunakawa, Takayuki wrote:

Are you considering some upper limit on the number of prepared statements?
In this case we need some kind of LRU for maintaining cache of
autoprepared statements.
I think that it is good idea to have such limited cached - it can
avoid memory overflow problem.
I will try to implement it.

I attach a new patch which makes it possible to limit the number of
autoprepared statements (autoprepare_limit GUC variable).
Also I did more measurements, now with several concurrent connections
and read-only statements.
Results of pgbench with 10 connections, scale 10 and read-only
statements are below:

Protocol             TPS
extended             87k
prepared             209k
simple+autoprepare   206k

As you can see, autoprepare provides more than a 2x speed improvement.

Also I tried to measure the overhead of parsing (to be able to substitute
all literals, not only string literals).
I just added an extra call of pg_parse_query. Speed is reduced to 181k.
So the overhead is noticeable, but the optimization is still useful.
This is why I want to ask: is it better to implement the slower
but safer and more universal solution?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare.patch (text/x-patch)
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f6be98b..0c9abfc 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -188,6 +188,7 @@ static bool IsTransactionStmtList(List *parseTrees);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char *query_string);
 
 
 /* ----------------------------------------------------------------
@@ -916,6 +917,14 @@ exec_simple_query(const char *query_string)
 	drop_unnamed_stmt();
 
 	/*
+	 * Try to find cached plan
+	 */
+	if (autoprepare_threshold != 0 && exec_cached_query(query_string))
+	{
+		return;
+	}
+
+	/*
 	 * Switch to appropriate context for constructing parsetrees.
 	 */
 	oldcontext = MemoryContextSwitchTo(MessageContext);
@@ -4500,3 +4509,606 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 				  port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+typedef struct { 
+	char const*       query;
+	dlist_node        lru;
+	int64             exec_count;
+	CachedPlanSource* plan;	
+	int               n_params;
+	int16             format;
+	bool              disable_autoprepare;
+} plan_cache_entry;
+
+/*
+ * Replace string literals with parameters. We do not consider integer or real literals to avoid problems with
+ * negative numbers, user-defined operators, etc. For example, it is not easy to distinguish the cases (-1), (1-1), (1-1)-1.
+ */
+static void generalize_statement(const char *query_string, char** gen_query, char** query_params, int* n_params)
+{
+	size_t query_len = strlen(query_string);
+	char const* src = query_string;
+	char* dst;
+	char* params;
+	unsigned char ch;
+
+	*n_params = 0;
+
+	*gen_query = (char*)palloc(query_len*2); /* assume that we have less than 1000 parameters, the worst case is replacing '' with $999 */
+	*query_params = (char*)palloc(query_len + 1);
+	dst = *gen_query;
+	params = *query_params;
+
+	while ((ch = *src++) != '\0') { 
+		if (isspace(ch)) { 
+			/* Replace sequence of whitespaces with just one space */
+			while (*src && isspace(*(unsigned char*)src)) { 
+				src += 1;
+			}
+			*dst++ = ' ';
+		} else if (ch == '\'') { 
+			while (true) { 
+				ch = *src++;				
+				if (ch == '\'') { 
+					if (*src != '\'') { 
+						break;
+					} else {
+						/* escaped quote */
+						*params++ = '\'';
+						src += 1;
+					}
+				} else { 
+					*params++ = ch;
+				}
+			}
+			*params++ = '\0';
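+			/* Emit $n in place of the literal; its text was copied to the params buffer */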
+			dst += sprintf(dst, "$%d", ++*n_params);
+		} else { 
+			*dst++ = ch;
+		}
+	}			
+	Assert(dst <= *gen_query + query_len*2);
+	Assert(params <= *query_params + query_len + 1);
+	*dst = '\0';
+}
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return string_hash(((plan_cache_entry*)key)->query, 0);
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	return strcmp(((plan_cache_entry*)key1)->query, ((plan_cache_entry*)key2)->query);
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{ 
+	((plan_cache_entry*)dest)->query = pstrdup(((plan_cache_entry*)src)->query);
+    return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+size_t nPlanCacheHits;
+size_t nPlanCacheMisses;
+
+/*
+ * Try to generalize query, find cached plan for it and execute
+ */
+static bool exec_cached_query(const char *query_string)
+{
+	CommandDest       dest = whereToSendOutput;
+	DestReceiver     *receiver;
+	char             *gen_query;
+	char             *query_params;
+	int               n_params;
+	plan_cache_entry *entry;
+	bool              found;
+	MemoryContext     old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo     params;
+	int               paramno;
+	CachedPlan       *cplan;
+	Portal		      portal;
+	bool		      was_logged = false;
+	bool		      is_xact_command;
+	bool		      execute_is_fetch;
+	char		      completion_tag[COMPLETION_TAG_BUFSIZE];
+	bool	 	      save_log_statement_stats = log_statement_stats;
+	ParamListInfo     portal_params;
+	const char       *source_text;
+	char		      msec_str[32];
+	bool		      snapshot_set = false;
+		
+	static HTAB*      plan_cache;
+	static dlist_head lru;
+	static size_t     n_cached_queries;
+	static MemoryContext plan_cache_context;
+
+	/* 
+	 * Extract literals from query
+	 */
+	generalize_statement(query_string, &gen_query, &query_params, &n_params);
+
+	if (plan_cache_context == NULL) { 
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/* 
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache == NULL) 
+	{ 
+		static HASHCTL info;
+		info.keysize = sizeof(char*);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;	
+		plan_cache = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE, 
+								 &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&lru);
+	}
+	
+	/*
+	 * Lookup generalized query 
+	 */
+	entry = (plan_cache_entry*)hash_search(plan_cache, &gen_query, HASH_ENTER, &found);
+	if (!found) { 
+		if (++n_cached_queries > autoprepare_limit && autoprepare_limit != 0) { 
+			/* Evict the least recently used query (tail of the LRU list) */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, lru.head.prev);
+			dlist_delete(&victim->lru);
+			if (victim->plan != NULL)
+				DropCachedPlan(victim->plan);
+			hash_search(plan_cache, victim, HASH_REMOVE, NULL);
+			n_cached_queries -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	} else { 
+		dlist_delete(&entry->lru);
+		if (entry->plan != NULL && !entry->plan->is_valid) { 
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
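+	/* Move the entry to the head of the LRU list (most recently used first) */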
+	dlist_insert_after(&lru.head, &entry->lru);
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold) { 
+		nPlanCacheMisses += 1;
+		return false;
+	}
+	if (entry->plan == NULL) { 
+		List       *parsetree_list;
+		Node       *raw_parse_tree;
+		const char *command_tag;
+		Query	   *query;
+		List	   *querytree_list;
+		Oid        *param_types = NULL;
+		int         num_params = 0;
+
+		old_context = MemoryContextSwitchTo(MessageContext);
+		
+		PG_TRY();
+		{    
+			parsetree_list = pg_parse_query(gen_query);
+			query_string = gen_query;
+		}
+		PG_CATCH();
+		{
+			elog(LOG, "Failed to autoprepare query \"%s\"", gen_query);
+			FlushErrorState();
+			MemoryContextSwitchTo(old_context);
+			entry->disable_autoprepare = true;
+			nPlanCacheMisses += 1;
+			return false;
+		}
+		PG_END_TRY();
+		
+		/*
+		 * Only a single user statement is allowed in a prepared statement.
+		 */
+		if (list_length(parsetree_list) != 1) {
+			MemoryContextSwitchTo(old_context);
+			entry->disable_autoprepare = true;
+			nPlanCacheMisses += 1;
+			return false;
+		}
+
+		raw_parse_tree = (Node *) linitial(parsetree_list);
+		
+		/*
+		 * Get the command name for possible use in status display.
+		 */
+		command_tag = CreateCommandTag(raw_parse_tree);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ROLLBACK.  It is important that this test occur before we
+		 * try to do parse analysis, rewrite, or planning, since all those
+		 * phases try to do database accesses, which may fail in abort state.
+		 * (It might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, command_tag);
+
+		/*
+		 * Set up a snapshot if parse analysis will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		/*
+		 * Analyze and rewrite the query.  Note that the originally specified
+		 * parameter set is not required to be complete, so we have to use
+		 * parse_analyze_varparams().
+		 */
+		if (log_parser_stats) {
+			ResetUsage();
+		}
+
+		query = parse_analyze_varparams(raw_parse_tree,
+										query_string,
+										&param_types,
+										&num_params);
+		Assert(num_params == n_params);
+
+		/*
+		 * Check all parameter types got determined.
+		 */
+		for (paramno = 0; paramno < n_params; paramno++)
+		{
+			Oid			ptype = param_types[paramno];
+
+			if (ptype == InvalidOid || ptype == UNKNOWNOID) {
+				/* Type of parameter can not be determined */
+				MemoryContextSwitchTo(old_context);
+				entry->disable_autoprepare = true;
+				nPlanCacheMisses += 1;
+				return false;
+			}
+		}
+		
+		if (log_parser_stats) {
+			ShowUsage("PARSE ANALYSIS STATISTICS");
+		}
+
+		querytree_list = pg_rewrite_query(query);
+
+		/* Done with the snapshot used for parsing */
+		if (snapshot_set) {
+			PopActiveSnapshot();
+		}
+
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+
+		SaveCachedPlan(psrc);
+		entry->plan = psrc;
+		entry->n_params = n_params;
+		MemoryContextSwitchTo(old_context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+	} else { 
+		psrc = entry->plan;
+		n_params = entry->n_params;
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.  We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.  We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+		
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	} else { 
+		snapshot_set = false;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		params = (ParamListInfo) palloc(offsetof(ParamListInfoData, params) +
+										n_params * sizeof(ParamExternData));
+		params->paramFetch = NULL;
+		params->paramFetchArg = NULL;
+		params->parserSetup = NULL;
+		params->parserSetupArg = NULL;
+		params->numParams = n_params;
+		params->paramMask = NULL;
+		
+		for (paramno = 0; paramno < n_params; paramno++)
+		{
+			Oid			ptype = psrc->param_types[paramno];
+			Oid			typinput;
+			Oid			typioparam;
+			
+			getTypeInputInfo(ptype, &typinput, &typioparam);
+			
+			params->params[paramno].value = OidInputFunctionCall(typinput, query_params, typioparam, -1);
+			params->params[paramno].isnull = false;
+
+			/*
+			 * We mark the params as CONST.  This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+
+			query_params += strlen(query_params) + 1;
+		}
+	} else { 
+		params = NULL;
+	}
+		
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.  Any cruft from (re)planning
+	 * will be generated in MessageContext.  The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false);
+	
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set) {
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/* Does the portal contain a transaction command? */
+	is_xact_command = IsTransactionStmtList(portal->stmts);
+
+	/*
+	 * We must copy the sourceText into MessageContext in
+	 * case the portal is destroyed during finish_xact_command. Can avoid the
+	 * copy if it's not an xact command, though.
+	 */
+	if (is_xact_command)
+	{
+		source_text = pstrdup(portal->sourceText);
+		/*
+		 * An xact command shouldn't have any parameters, which is a good
+		 * thing because they wouldn't be around after finish_xact_command.
+		 */
+		portal_params = NULL;
+	}
+	else
+	{
+		source_text = portal->sourceText;
+		portal_params = portal->portalParams;
+	}
+
+	/*
+	 * Report query to various monitoring facilities.
+	 */
+	debug_query_string = source_text;
+
+	pgstat_report_activity(STATE_RUNNING, source_text);
+
+	set_ps_display(portal->commandTag, false);
+
+	if (save_log_statement_stats) {
+		ResetUsage();
+	}
+
+	BeginCommand(portal->commandTag, dest);
+
+	/*
+	 * Create dest receiver in MessageContext (we don't want it in transaction
+	 * context, because that may get deleted if portal contains VACUUM).
+	 */
+	receiver = CreateDestReceiver(dest);
+	SetRemoteDestReceiverParams(receiver, portal);
+
+	/*
+	 * If we re-issue an Execute protocol request against an existing portal,
+	 * then we are only fetching more rows rather than completely re-executing
+	 * the query from the start. atStart is never reset for a v3 portal, so we
+	 * are safe to use this check.
+	 */
+	execute_is_fetch = !portal->atStart;
+
+	/* Log immediately if dictated by log_statement */
+	if (check_log_statement(portal->stmts))
+	{
+		ereport(LOG,
+				(errmsg("%s %s%s%s: %s",
+						execute_is_fetch ?
+						_("execute fetch from") :
+						_("execute"),
+						"<unnamed>",
+						"",
+						"",
+						source_text),
+				 errhidestmt(true),
+				 errdetail_params(portal_params)));
+		was_logged = true;
+	}
+
+	/* Check for cancel signal before we start execution */
+	CHECK_FOR_INTERRUPTS();
+
+	/*
+	 * Run the portal to completion, and then drop it (and the receiver).
+	 */
+	(void) PortalRun(portal,
+					 FETCH_ALL,
+					 true,
+					 receiver,
+					 receiver,
+					 completion_tag);
+
+	(*receiver->rDestroy) (receiver);
+
+	PortalDrop(portal, false);
+
+	/*
+	 * Tell client that we're done with this query.  Note we emit exactly
+	 * one EndCommand report for each raw parsetree, thus one for each SQL
+	 * command the client sent, regardless of rewriting. (But a command
+	 * aborted by error will not send an EndCommand report at all.)
+	 */
+	EndCommand(completion_tag, dest);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	/*
+	 * Emit duration logging if appropriate.
+	 */
+	switch (check_log_duration(msec_str, was_logged))
+	{
+		case 1:
+			ereport(LOG,
+					(errmsg("duration: %s ms", msec_str),
+					 errhidestmt(true)));
+			break;
+		case 2:
+			ereport(LOG,
+					(errmsg("duration: %s ms  %s %s%s%s: %s",
+							msec_str,
+							execute_is_fetch ?
+							_("execute fetch from") :
+							_("execute"),
+							"<unnamed>",
+							"",
+							"",
+							source_text),
+					 errhidestmt(true),
+					 errdetail_params(portal_params)));
+			break;
+	}
+
+	if (save_log_statement_stats) {
+		ShowUsage("EXECUTE MESSAGE STATISTICS");
+	}
+	debug_query_string = NULL;
+	nPlanCacheHits += 1;
+
+	return true;
+}
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 4f1891f..4bb5b65 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -450,6 +450,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1949,6 +1953,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing query."),
+		 NULL,
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal number of autoprepared queries."),
+		 NULL,
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 0bf9f21..f035ce7 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -252,6 +252,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
#28Pavel Stehule
pavel.stehule@gmail.com
In reply to: Konstantin Knizhnik (#27)
Re: Cached plans and statement generalization

2017-04-26 12:30 GMT+02:00 Konstantin Knizhnik <k.knizhnik@postgrespro.ru>:

On 26.04.2017 10:49, Konstantin Knizhnik wrote:

On 26.04.2017 04:00, Tsunakawa, Takayuki wrote:

Are you considering some upper limit on the number of prepared statements?
In this case we need some kind of LRU for maintaining cache of
autoprepared statements.
I think that it is good idea to have such limited cached - it can avoid
memory overflow problem.
I will try to implement it.

I attach new patch which allows to limit the number of autoprepared
statements (autoprepare_limit GUC variable).
Also I did more measurements, now with several concurrent connections and
read-only statements.
Results of pgbench with 10 connections, scale 10 and read-only statements
are below:

Protocol             TPS
extended             87k
prepared             209k
simple+autoprepare   206k

As you can see, autoprepare provides more than 2 times speed improvement.

Also I tried to measure overhead of parsing (to be able to substitute all
literals, not only string literals).
I just added extra call of pg_parse_query. Speed is reduced to 181k.
So overhead is noticeable, but still making such optimization useful.
This is why I want to ask question: is it better to implement slower but
safer and more universal solution?

An unsafe solution does not make any sense, and it is dangerous (80% of
database users do not have the necessary knowledge). If somebody needs the
maximum possible performance, then they can use explicit prepared statements.

Regards

Pavel


--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#29Doug Doole
ddoole@salesforce.com
In reply to: Doug Doole (#20)
Re: Cached plans and statement generalization

A naive option would be to invalidate anything that depends on table or
view *.FOOBAR. You could probably make it a bit smarter by also requiring
that schema A appear in the path.

This has been rumbling around in my head. I wonder if you could solve this
problem by registering dependencies on objects which don't yet exist.
Consider:

CREATE TABLE C.T1(...);
CREATE TABLE C.T2(...);
SET search_path='A,B,C,D';
SELECT * FROM C.T1, T2;

For T1, you'd register a hard dependency on C.T1 and no virtual
dependencies since the table is explicitly qualified.

For T2, you'd register a hard dependency on C.T2 since that is the table
that was selected for the query. You'd also register virtual dependencies
on A.T2 and B.T2 since if either of those tables (or views) are created you
need to recompile the statement. (Note that no virtual dependency is
created on D.T2() since that table would never be selected by the compiler.)

The catch is that virtual dependencies would have to be recorded and
searched as strings, not OIDs since the objects don't exist. Virtual
dependencies only have to be checked during CREATE processing though, so
that might not be too bad.
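
A minimal sketch of the bookkeeping (hypothetical names, nothing from an
actual patch):

#include <stdbool.h>
#include <string.h>

/* A virtual dependency is a (schema, object) name pair that does not
 * resolve to an OID yet, so it has to be stored and compared as strings. */
typedef struct VirtualDep
{
	char		schema[64];		/* e.g. "A" or "B" */
	char		object[64];		/* e.g. "T2" */
	struct VirtualDep *next;	/* per-plan list of virtual deps */
} VirtualDep;

/*
 * Called from CREATE TABLE / CREATE VIEW processing: if the newly created
 * name matches any recorded virtual dependency, the owning plan must be
 * recompiled, because name resolution could now pick the new object.
 */
static bool
creation_invalidates_plan(const VirtualDep *deps,
						  const char *new_schema, const char *new_object)
{
	const VirtualDep *d;

	for (d = deps; d != NULL; d = d->next)
	{
		if (strcmp(d->schema, new_schema) == 0 &&
			strcmp(d->object, new_object) == 0)
			return true;
	}
	return false;
}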

But this is getting off topic - I just wanted to capture the idea while it
was rumbling around.

#30Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Doug Doole (#29)
Re: Cached plans and statement generalization

On 04/26/2017 08:08 PM, Doug Doole wrote:

A naive option would be to invalidate anything that depends on table or view *.FOOBAR. You could probably make it a bit smarter by also requiring that schema A appear in the path.

This has been rumbling around in my head. I wonder if you could solve this problem by registering dependencies on objects which don't yet exist. Consider:

CREATE TABLE C.T1(...);
CREATE TABLE C.T2(...);
SET search_path='A,B,C,D';
SELECT * FROM C.T1, T2;

For T1, you'd register a hard dependency on C.T1 and no virtual dependencies since the table is explicitly qualified.

For T2, you'd register a hard dependency on C.T2 since that is the table that was selected for the query. You'd also register virtual dependencies on A.T2 and B.T2 since if either of those tables (or views) are created you need to recompile the
statement. (Note that no virtual dependency is created on D.T2 since that table would never be selected by the compiler.)

The catch is that virtual dependencies would have to be recorded and searched as strings, not OIDs since the objects don't exist. Virtual dependencies only have to be checked during CREATE processing though, so that might not be too bad.

But this is getting off topic - I just wanted to capture the idea while it was rumbling around.

I think that it will be enough to handle modification of the search path and invalidate the prepared statement cache in this case.
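
For example (a hypothetical sketch, not part of the attached patch), an
assign hook on the search_path GUC could simply drop all autoprepared
statements:

	/* reset_autoprepare_cache() stands in for whatever routine walks the
	 * cache, calling DropCachedPlan() and removing the hash entries. */
	extern void reset_autoprepare_cache(void);

	static void
	assign_search_path_hook(const char *newval, void *extra)
	{
		/* Coarse but safe: any change to name resolution invalidates every
		 * autoprepared statement, forcing re-planning under the new path. */
		reset_autoprepare_cache();
	}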

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#31Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Pavel Stehule (#28)
1 attachment(s)
Re: Cached plans and statement generalization

On 26.04.2017 13:46, Pavel Stehule wrote:

I attach new patch which allows to limit the number of
autoprepared statements (autoprepare_limit GUC variable).
Also I did more measurements, now with several concurrent
connections and read-only statements.
Results of pgbench with 10 connections, scale 10 and read-only
statements are below:

Protocol             TPS
extended             87k
prepared             209k
simple+autoprepare   206k

As you can see, autoprepare provides more than 2 times speed
improvement.

Also I tried to measure overhead of parsing (to be able to
substitute all literals, not only string literals).
I just added extra call of pg_parse_query. Speed is reduced to 181k.
So overhead is noticeable, but still making such optimization useful.
This is why I want to ask question: is it better to implement
slower but safer and more universal solution?

Unsafe solution has not any sense, and it is dangerous (80% of
database users has not necessary knowledge). If somebody needs the max
possible performance, then he use explicit prepared statements.

I attach a new patch to this mail. I have completely reimplemented my original
approach and now use parse tree transformation.
New pgbench (-S -c 10) results are the following:

Protocol             TPS
extended             87k
prepared             209k
simple+autoprepare   185k

So there is some slowdown compared with my original implementation and with
explicitly prepared statements, but it still provides more than a two times
speed-up compared with unprepared queries. And it doesn't require changing
existing applications.
As most real production applications work with the DBMS through some
connection pool (pgbouncer, ...), I think that such an optimization will be
useful.
Isn't it interesting if we can increase system throughput almost two times
by just setting one parameter in the configuration file?

I also tried to enable autoprepare by default and run the regression tests.
7 tests fail for the following reasons:
1. Slightly different error reporting (for example, the error location is
not always identically specified).
2. Difference in query behavior caused by changed local settings
(Andres gave an example with search_path, and the date test fails
because of a changed datestyle).
3. Problems with indirect dependencies (when a table is altered, only
cached plans directly depending on this relation are invalidated, but
not plans with indirect dependencies).
4. Not performing domain checks for null values.

I do not think that these issues can cause problems for real applications.

Also it is possible to limit the number of autoprepared statements using the
autoprepare_limit parameter, avoiding possible backend memory overflow in
case of a large number of unique queries sent by the application. An LRU
discipline is used to drop the least recently used plans.

Any comments and suggestions for future improvement of this patch are
welcome.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare.patch (text/x-patch)
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index cd39167..4fbc8b7 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3610,6 +3610,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to a pointer to the current node and can update this field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, doing in-place updates instead.
+ * 
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The walker has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink    *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume mutator(& doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr    *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume mutator(& doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+		    return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index f8d43db..05bce20 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2506,6 +2506,44 @@ _outSelectStmt(StringInfo str, const SelectStmt *node)
 }
 
 static void
+_outInsertStmt(StringInfo str, const InsertStmt *node)
+{
+	WRITE_NODE_TYPE("INSERT");
+
+	WRITE_NODE_FIELD(relation);
+	WRITE_NODE_FIELD(cols);
+	WRITE_NODE_FIELD(selectStmt);
+	WRITE_NODE_FIELD(onConflictClause);
+	WRITE_NODE_FIELD(returningList);
+	WRITE_NODE_FIELD(withClause);
+}
+
+static void
+_outDeleteStmt(StringInfo str, const DeleteStmt *node)
+{
+	WRITE_NODE_TYPE("DELETE");
+
+	WRITE_NODE_FIELD(relation);
+	WRITE_NODE_FIELD(usingClause);
+	WRITE_NODE_FIELD(whereClause);
+	WRITE_NODE_FIELD(returningList);
+	WRITE_NODE_FIELD(withClause);
+}
+
+static void
+_outUpdateStmt(StringInfo str, const UpdateStmt *node)
+{
+	WRITE_NODE_TYPE("UPDATE");
+
+	WRITE_NODE_FIELD(relation);
+	WRITE_NODE_FIELD(targetList);
+	WRITE_NODE_FIELD(whereClause);
+	WRITE_NODE_FIELD(fromClause);
+	WRITE_NODE_FIELD(returningList);
+	WRITE_NODE_FIELD(withClause);
+}
+
+static void
 _outFuncCall(StringInfo str, const FuncCall *node)
 {
 	WRITE_NODE_TYPE("FUNCCALL");
@@ -3733,6 +3771,15 @@ outNode(StringInfo str, const void *obj)
 			case T_SelectStmt:
 				_outSelectStmt(str, obj);
 				break;
+			case T_UpdateStmt:
+				_outUpdateStmt(str, obj);
+				break;
+			case T_DeleteStmt:
+				_outDeleteStmt(str, obj);
+				break;
+			case T_InsertStmt:
+				_outInsertStmt(str, obj);
+				break;
 			case T_ColumnDef:
 				_outColumnDef(str, obj);
 				break;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f6be98b..5fee0b5 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -69,6 +71,7 @@
 #include "tcop/utility.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
+#include "utils/builtins.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
@@ -188,6 +191,7 @@ static bool IsTransactionStmtList(List *parseTrees);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, Node* parse_tree);
 
 
 /* ----------------------------------------------------------------
@@ -951,6 +955,14 @@ exec_simple_query(const char *query_string)
 	isTopLevel = (list_length(parsetree_list) == 1);
 
 	/*
+	 * Try to find cached plan
+	 */
+	if (isTopLevel && autoprepare_threshold != 0 && exec_cached_query(query_string, linitial(parsetree_list)))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -4500,3 +4512,842 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 				  port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Autoprepare implementation.
+ * It combines exec_parse_message + exec_bind_message + exec_execute_message
+ */
+
+/*
+ * Plan cache entry
+ */
+typedef struct {
+	Node*             parse_tree; /* tree is used as hash key */
+	dlist_node        lru;        /* double linked list to implement LRU */
+	int64             exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32            hash;       /* hash calculated for this parsed tree */
+	int               n_params;	  /* number of parameters extracted for this query */
+	int16             format;     /* portal output format */
+	bool              disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	return !equal(((plan_cache_entry*)key1)->parse_tree, ((plan_cache_entry*)key2)->parse_tree);
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->hash = src_entry->hash;
+    return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+size_t n_plan_cache_hits;
+size_t n_plan_cache_misses;
+size_t n_cached_queries;
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ConstParam {
+	A_Const*     literal; /* Original literal */
+	ParamRef*    param;   /* Constructed parameter reference */
+	Node**       ref;     /* Pointer to pointer to literal node (used to revert parse tree update) */
+	struct ConstParam* next; /* singly linked list of query parameters */
+} ConstParam;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int          n_params; /* Number of extracted parameters */
+	uint32       hash;     /* We calculate hash for parse tree during plan traversal */
+	ConstParam** param_list_tail; /* pointer to the last element's "next" field, used to construct the singly linked list of parameters */
+} GeneralizerCtx;
+
+/*
+ * Check if an expression is constant (used to avoid substitution of literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+query_plan_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * Here we calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * This is not essential, because the hash calculation doesn't take into account types and values of Const nodes, so the same generalized queries
+	 * will have the same hash value. There are about 1000 different node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not perform substitution of literals in a constant expression (which is likely to be the same for all queries and folded by the compiler)
+			 */
+		    if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (query_plan_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (query_plan_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+	    case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			ConstParam* cp = palloc(sizeof(ConstParam));
+			ParamRef* param = makeNode(ParamRef);
+			param->number = ++ctx->n_params;
+			param->location = literal->location;
+			cp->ref = ref;
+			cp->param = param;
+			cp->literal = literal;
+			*ctx->param_list_tail = cp;
+			ctx->param_list_tail = &cp->next;
+			*ref = (Node*)param;
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the WHERE, VALUES and WITH clauses,
+		   * skipping the target and from lists, which are unlikely to contain parameterized values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (query_plan_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (query_plan_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (query_plan_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, query_plan_generalizer, context);
+	}
+	return false;
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(Node* parse_tree, ConstParam* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+	n_plan_cache_misses += 1;
+}
+
+/*
+ * Callback for raw_expression_tree_walker dropping a parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of converted literal in query.
+ * Used for precise error reporting (line number)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+/*
+ * Try to generalize the query, find a cached plan for it and execute it
+ */
+static bool exec_cached_query(const char *query_string, Node* parse_tree)
+{
+	CommandDest       dest = whereToSendOutput;
+	DestReceiver     *receiver;
+	int               n_params;
+	plan_cache_entry *entry;
+	bool              found;
+	MemoryContext     old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo     params;
+	int               paramno;
+	CachedPlan       *cplan;
+	Portal		      portal;
+	bool		      was_logged = false;
+	bool		      is_xact_command;
+	bool		      execute_is_fetch;
+	char		      completion_tag[COMPLETION_TAG_BUFSIZE];
+	bool	 	      save_log_statement_stats = log_statement_stats;
+	ParamListInfo     portal_params;
+	const char       *source_text;
+	char		      msec_str[32];
+	bool		      snapshot_set = false;
+	GeneralizerCtx    ctx;
+	ConstParam*       const_param;
+	ConstParam*       const_param_list;
+	plan_cache_entry  pattern;
+
+	static HTAB*      plan_cache; /* hash table for plan cache */
+	static dlist_head lru;        /* LRU doubly linked list for cached queries */
+	static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+	/*
+	 * Substitute literals with parameters and calculate hash for parse tree
+	 */
+	ctx.param_list_tail = &const_param_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (query_plan_generalizer((Node**)&parse_tree, &ctx)) {
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(parse_tree, const_param_list);
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL) {
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+								 &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = parse_tree;
+	pattern.hash = ctx.hash;
+	entry = (plan_cache_entry*)hash_search(plan_cache, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++n_cached_queries > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop the least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			DropCachedPlan(victim->plan);
+			hash_search(plan_cache, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			n_cached_queries -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid) {
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(parse_tree, const_param_list);
+		return false;
+	}
+	if (entry->plan == NULL)
+	{
+		/*
+		 * Prepare new plan
+		 */
+		const char *command_tag;
+		Query	   *query;
+		List	   *querytree_list;
+		Oid        *param_types = NULL;
+		int         num_params = 0;
+
+		/*
+		 * Switch to appropriate context for constructing parsetrees.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for possible use in status display.
+		 */
+		command_tag = CreateCommandTag(parse_tree);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ROLLBACK.  It is important that this test occur before we
+		 * try to do parse analysis, rewrite, or planning, since all those
+		 * phases try to do database accesses, which may fail in abort state.
+		 * (It might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(parse_tree))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(parse_tree, query_string, command_tag);
+
+		/*
+		 * Set up a snapshot if parse analysis will need one.
+		 */
+		if (analyze_requires_snapshot(parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		/*
+		 * Analyze and rewrite the query.  Note that the originally specified
+		 * parameter set is not required to be complete, so we have to use
+		 * parse_analyze_varparams().
+		 */
+		if (log_parser_stats) {
+			ResetUsage();
+		}
+
+		PG_TRY();
+		{
+			query = parse_analyze_varparams(parse_tree,
+											query_string,
+											&param_types,
+											&num_params);
+		}
+		PG_CATCH();
+		{
+			/*
+			 * In case of analyze errors, revert back to original query processing
+			 * and disable autoprepare for this query to avoid such problems in the future.
+			 */
+			FlushErrorState();
+			if (snapshot_set) {
+				PopActiveSnapshot();
+			}
+			entry->disable_autoprepare = true;
+			undo_query_plan_changes(parse_tree, const_param_list);
+			MemoryContextSwitchTo(old_context);
+			return false;
+		}
+		PG_END_TRY();
+
+		Assert(num_params == n_params);
+
+		/*
+		 * Check all parameter types got determined.
+		 */
+		for (paramno = 0, const_param = const_param_list;
+			 paramno < n_params;
+			 paramno++, const_param = const_param->next)
+		{
+			Oid			ptype = param_types[paramno];
+
+			/*
+			 * Check if the type of the parameter can be inferred from the query and is compatible with the actual literal type.
+			 * We explicitly exclude some cases when the parameter type is wrongly assumed to be TEXT.
+			 * Hopefully there will be few such misdetections in real queries.
+			 */
+			if (ptype == InvalidOid || ptype == UNKNOWNOID
+				|| (ptype == TEXTOID && (const_param->literal->val.type == T_BitString || const_param->literal->val.type == T_Integer)))
+			{
+				/* Type of parameter cannot be determined: disable autoprepare for this query. */
+				if (snapshot_set) {
+					PopActiveSnapshot();
+				}
+				entry->disable_autoprepare = true;
+				undo_query_plan_changes(parse_tree, const_param_list);
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+		}
+
+		if (log_parser_stats) {
+			ShowUsage("PARSE ANALYSIS STATISTICS");
+		}
+
+		querytree_list = pg_rewrite_query(query);
+
+		/* Done with the snapshot used for parsing */
+		if (snapshot_set) {
+			PopActiveSnapshot();
+			snapshot_set = false;
+		}
+
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(parse_tree, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)parse_tree;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+
+		/*
+		 * Register cached plan for invalidation mechanism
+		 */
+		SaveCachedPlan(psrc);
+		entry->plan = psrc;
+		entry->n_params = n_params;
+
+		MemoryContextSwitchTo(old_context); /* Done with message context */
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.  We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.  We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+		Oid  typinput;
+		Oid  typioparam;
+		char buf[64];
+
+		/*
+		 * Register an error callback to precisely report the error in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		params = (ParamListInfo) palloc(offsetof(ParamListInfoData, params) +
+										n_params * sizeof(ParamExternData));
+		params->paramFetch = NULL;
+		params->paramFetchArg = NULL;
+		params->parserSetup = NULL;
+		params->parserSetupArg = NULL;
+		params->numParams = n_params;
+		params->paramMask = NULL;
+
+		for (paramno = 0, const_param = const_param_list;
+			 paramno < n_params;
+			 paramno++, const_param = const_param->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+
+			param_location = const_param->literal->location;
+
+			params->params[paramno].isnull = false;
+
+			/* Convert literal value to parameter value */
+			switch (const_param->literal->val.type)
+			{
+			  /*
+			   * Convert from integer literal
+			   */
+			  case T_Integer:
+				switch (ptype) {
+				  case INT8OID:
+					params->params[paramno].value = Int64GetDatum((int64)const_param->literal->val.val.ival);
+					break;
+				  case INT4OID:
+					params->params[paramno].value = Int32GetDatum((int32)const_param->literal->val.val.ival);
+					break;
+				  case INT2OID:
+					if (const_param->literal->val.val.ival < SHRT_MIN
+						|| const_param->literal->val.val.ival > SHRT_MAX)
+					{
+						ereport(ERROR,
+								(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
+								 errmsg("smallint out of range")));
+					}
+					params->params[paramno].value = Int16GetDatum((int16)const_param->literal->val.val.ival);
+					break;
+				  case FLOAT4OID:
+					params->params[paramno].value = Float4GetDatum((float)const_param->literal->val.val.ival);
+					break;
+				  case FLOAT8OID:
+					params->params[paramno].value = Float8GetDatum((double)const_param->literal->val.val.ival);
+					break;
+				  case INT4RANGEOID:
+					sprintf(buf, "[%ld,%ld]", const_param->literal->val.val.ival, const_param->literal->val.val.ival);
+					getTypeInputInfo(ptype, &typinput, &typioparam);
+					params->params[paramno].value = OidInputFunctionCall(typinput, buf, typioparam, -1);
+					break;
+				  default:
+					pg_lltoa(const_param->literal->val.val.ival, buf);
+					getTypeInputInfo(ptype, &typinput, &typioparam);
+					params->params[paramno].value = OidInputFunctionCall(typinput, buf, typioparam, -1);
+				}
+				break;
+			  case T_Null:
+				params->params[paramno].isnull = true;
+				break;
+			  default:
+				/*
+				 * Convert from string literal
+				 */
+				getTypeInputInfo(ptype, &typinput, &typioparam);
+				params->params[paramno].value = OidInputFunctionCall(typinput, const_param->literal->val.val.str, typioparam, -1);
+			}
+			/*
+			 * We mark the params as CONST.  This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	} else {
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.  Any cruft from (re)planning
+	 * will be generated in MessageContext.  The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set) {
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/* Does the portal contain a transaction command? */
+	is_xact_command = IsTransactionStmtList(portal->stmts);
+
+	/*
+	 * We must copy the sourceText into MessageContext in
+	 * case the portal is destroyed during finish_xact_command. Can avoid the
+	 * copy if it's not an xact command, though.
+	 */
+	if (is_xact_command)
+	{
+		source_text = pstrdup(portal->sourceText);
+		/*
+		 * An xact command shouldn't have any parameters, which is a good
+		 * thing because they wouldn't be around after finish_xact_command.
+		 */
+		portal_params = NULL;
+	}
+	else
+	{
+		source_text = portal->sourceText;
+		portal_params = portal->portalParams;
+	}
+
+	/*
+	 * Report query to various monitoring facilities.
+	 */
+	debug_query_string = source_text;
+
+	pgstat_report_activity(STATE_RUNNING, source_text);
+
+	set_ps_display(portal->commandTag, false);
+
+	if (save_log_statement_stats) {
+		ResetUsage();
+	}
+
+	BeginCommand(portal->commandTag, dest);
+
+	/*
+	 * Create dest receiver in MessageContext (we don't want it in transaction
+	 * context, because that may get deleted if portal contains VACUUM).
+	 */
+	receiver = CreateDestReceiver(dest);
+	if (dest == DestRemote) {
+		SetRemoteDestReceiverParams(receiver, portal);
+	}
+
+	/*
+	 * If we re-issue an Execute protocol request against an existing portal,
+	 * then we are only fetching more rows rather than completely re-executing
+	 * the query from the start. atStart is never reset for a v3 portal, so we
+	 * are safe to use this check.
+	 */
+	execute_is_fetch = !portal->atStart;
+
+	/* Log immediately if dictated by log_statement */
+	if (check_log_statement(portal->stmts))
+	{
+		ereport(LOG,
+				(errmsg("%s %s%s%s: %s",
+						execute_is_fetch ?
+						_("execute fetch from") :
+						_("execute"),
+						"<unnamed>",
+						"",
+						"",
+						source_text),
+				 errhidestmt(true),
+				 errdetail_params(portal_params)));
+		was_logged = true;
+	}
+
+	/* Check for cancel signal before we start execution */
+	CHECK_FOR_INTERRUPTS();
+
+	/*
+	 * Run the portal to completion, and then drop it (and the receiver).
+	 */
+	(void) PortalRun(portal,
+					 FETCH_ALL,
+					 true,
+					 receiver,
+					 receiver,
+					 completion_tag);
+
+	(*receiver->rDestroy) (receiver);
+
+	PortalDrop(portal, false);
+
+	/*
+	 * Tell client that we're done with this query.  Note we emit exactly
+	 * one EndCommand report for each raw parsetree, thus one for each SQL
+	 * command the client sent, regardless of rewriting. (But a command
+	 * aborted by error will not send an EndCommand report at all.)
+	 */
+	EndCommand(completion_tag, dest);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	/*
+	 * Emit duration logging if appropriate.
+	 */
+	switch (check_log_duration(msec_str, was_logged))
+	{
+		case 1:
+			ereport(LOG,
+					(errmsg("duration: %s ms", msec_str),
+					 errhidestmt(true)));
+			break;
+		case 2:
+			ereport(LOG,
+					(errmsg("duration: %s ms  %s %s%s%s: %s",
+							msec_str,
+							execute_is_fetch ?
+							_("execute fetch from") :
+							_("execute"),
+							"<unnamed>",
+							"",
+							"",
+							source_text),
+					 errhidestmt(true),
+					 errdetail_params(portal_params)));
+			break;
+	}
+
+	if (save_log_statement_stats) {
+		ShowUsage("EXECUTE MESSAGE STATISTICS");
+	}
+	debug_query_string = NULL;
+	n_plan_cache_hits += 1;
+
+	return true;
+}
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 4f1891f..859476f 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -450,6 +450,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1949,6 +1953,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicitly preparing frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold number of executions for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare."),
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries."),
+		},
+		&autoprepare_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 97af142..29a5216 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,8 +73,11 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 												   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+												   void *context);
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 											  void *context);
 
+
 #endif   /* NODEFUNCS_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 0bf9f21..f035ce7 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -252,6 +252,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
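
As a usage sketch (illustrative only; the table name is a made-up example),
with the two GUCs added above the feature could be enabled per session like
this:

    -- Autoprepare a statement once it has been executed more than 5 times,
    -- and keep at most 1000 autoprepared statements per backend.
    SET autoprepare_threshold = 5;
    SET autoprepare_limit = 1000;

    -- Once the threshold is reached, queries of the same shape are served
    -- by one cached generalized plan:
    SELECT balance FROM accounts WHERE id = 1;
    SELECT balance FROM accounts WHERE id = 2;  -- reuses the cached plan
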
#32Robert Haas
robertmhaas@gmail.com
In reply to: Konstantin Knizhnik (#31)
Re: Cached plans and statement generalization

On Fri, Apr 28, 2017 at 6:01 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

Any comments and suggestions for future improvement of this patch are
welcome.

+        PG_TRY();
+        {
+            query = parse_analyze_varparams(parse_tree,
+                                            query_string,
+                                            &param_types,
+                                            &num_params);
+        }
+        PG_CATCH();
+        {
+            /*
+             * In case of analyze errors, revert back to original query processing
+             * and disable autoprepare for this query to avoid such problems in the future.
+             */
+            FlushErrorState();
+            if (snapshot_set) {
+                PopActiveSnapshot();
+            }
+            entry->disable_autoprepare = true;
+            undo_query_plan_changes(parse_tree, const_param_list);
+            MemoryContextSwitchTo(old_context);
+            return false;
+        }
+        PG_END_TRY();

This is definitely not a safe way of using TRY/CATCH.

+
+            /* Convert literal value to parameter value */
+            switch (const_param->literal->val.type)
+            {
+              /*
+               * Convert from integer literal
+               */
+              case T_Integer:
+                switch (ptype) {
+                  case INT8OID:
+                    params->params[paramno].value = Int64GetDatum((int64)const_param->literal->val.val.ival);
+                    break;
+                  case INT4OID:
+                    params->params[paramno].value = Int32GetDatum((int32)const_param->literal->val.val.ival);
+                    break;
+                  case INT2OID:
+                    if (const_param->literal->val.val.ival < SHRT_MIN
+                        || const_param->literal->val.val.ival > SHRT_MAX)
+                    {
+                        ereport(ERROR,
+                                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
+                                 errmsg("smallint out of range")));
+                    }
+                    params->params[paramno].value = Int16GetDatum((int16)const_param->literal->val.val.ival);
+                    break;
+                  case FLOAT4OID:
+                    params->params[paramno].value = Float4GetDatum((float)const_param->literal->val.val.ival);
+                    break;
+                  case FLOAT8OID:
+                    params->params[paramno].value = Float8GetDatum((double)const_param->literal->val.val.ival);
+                    break;
+                  case INT4RANGEOID:
+                    sprintf(buf, "[%ld,%ld]", const_param->literal->val.val.ival, const_param->literal->val.val.ival);
+                    getTypeInputInfo(ptype, &typinput, &typioparam);
+                    params->params[paramno].value = OidInputFunctionCall(typinput, buf, typioparam, -1);
+                    break;
+                  default:
+                    pg_lltoa(const_param->literal->val.val.ival, buf);
+                    getTypeInputInfo(ptype, &typinput, &typioparam);
+                    params->params[paramno].value = OidInputFunctionCall(typinput, buf, typioparam, -1);
+                }
+                break;
+              case T_Null:
+                params->params[paramno].isnull = true;
+                break;
+              default:
+                /*
+                 * Convert from string literal
+                 */
+                getTypeInputInfo(ptype, &typinput, &typioparam);
+                params->params[paramno].value = OidInputFunctionCall(typinput, const_param->literal->val.val.str, typioparam, -1);
+            }

I don't see something with a bunch of hard-coded rules for particular type
OIDs having any chance of being acceptable.

This patch seems to duplicate a large amount of existing code. That would
be a good thing to avoid.

It could use a visit from the style police and a spell-checker, too.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#33Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Robert Haas (#32)
Re: Cached plans and statement generalization

On 01.05.2017 18:52, Robert Haas wrote:

On Fri, Apr 28, 2017 at 6:01 AM, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

Any comments and suggestions for future improvement of this patch
are welcome.

+        PG_TRY();
+        {
+            query = parse_analyze_varparams(parse_tree,
+                                            query_string,
+                                            &param_types,
+                                            &num_params);
+        }
+        PG_CATCH();
+        {
+            /*
+             * In case of analyze errors, revert back to original query processing
+             * and disable autoprepare for this query to avoid such problems in the future.
+             */
+            FlushErrorState();
+            if (snapshot_set) {
+                PopActiveSnapshot();
+            }
+            entry->disable_autoprepare = true;
+            undo_query_plan_changes(parse_tree, const_param_list);
+            MemoryContextSwitchTo(old_context);
+            return false;
+        }
+        PG_END_TRY();

This is definitely not a safe way of using TRY/CATCH.

+
+            /* Convert literal value to parameter value */
+            switch (const_param->literal->val.type)
+            {
+              /*
+               * Convert from integer literal
+               */
+              case T_Integer:
+                switch (ptype) {
+                  case INT8OID:
+                    params->params[paramno].value = Int64GetDatum((int64)const_param->literal->val.val.ival);
+                    break;
+                  case INT4OID:
+                    params->params[paramno].value = Int32GetDatum((int32)const_param->literal->val.val.ival);
+                    break;
+                  case INT2OID:
+                    if (const_param->literal->val.val.ival < SHRT_MIN
+                        || const_param->literal->val.val.ival > SHRT_MAX)
+                    {
+                        ereport(ERROR,
+                                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
+                                 errmsg("smallint out of range")));
+                    }
+                    params->params[paramno].value = Int16GetDatum((int16)const_param->literal->val.val.ival);
+                    break;
+                  case FLOAT4OID:
+                    params->params[paramno].value = Float4GetDatum((float)const_param->literal->val.val.ival);
+                    break;
+                  case FLOAT8OID:
+                    params->params[paramno].value = Float8GetDatum((double)const_param->literal->val.val.ival);
+                    break;
+                  case INT4RANGEOID:
+                    sprintf(buf, "[%ld,%ld]", const_param->literal->val.val.ival, const_param->literal->val.val.ival);
+                    getTypeInputInfo(ptype, &typinput, &typioparam);
+                    params->params[paramno].value = OidInputFunctionCall(typinput, buf, typioparam, -1);
+                    break;
+                  default:
+                    pg_lltoa(const_param->literal->val.val.ival, buf);
+                    getTypeInputInfo(ptype, &typinput, &typioparam);
+                    params->params[paramno].value = OidInputFunctionCall(typinput, buf, typioparam, -1);
+                }
+                break;
+              case T_Null:
+                params->params[paramno].isnull = true;
+                break;
+              default:
+                /*
+                 * Convert from string literal
+                 */
+                getTypeInputInfo(ptype, &typinput, &typioparam);
+                params->params[paramno].value = OidInputFunctionCall(typinput, const_param->literal->val.val.str, typioparam, -1);
+            }

I don't see something with a bunch of hard-coded rules for particular
type OIDs having any chance of being acceptable.

Well, what I need is to convert a literal value represented in a Value
struct to a parameter datum value.
The Value struct contains a union holding either an integer literal or text.
So this piece of code just provides efficient handling of the most common
cases (integer parameters) and uses the type's input function in the other cases.

This patch seems to duplicate a large amount of existing code. That
would be a good thing to avoid.

Yes, I had to copy a lot of code from the exec_parse_message +
exec_bind_message + exec_execute_message functions.
Definitely, copying code is a bad flaw. It would be much better and easier
just to call the three original functions instead of gathering their code
into the new function.
But I failed to do that because:
1. Autoprepare should be integrated into exec_simple_query. Before
executing a query in the normal way, I need to perform a cache lookup for
a previously prepared plan for this generalized query.
And generalization of a query requires building the query tree (query
parsing). In other words, parsing has to be done before I can call
exec_parse_message.
2. exec_bind_message works with parameters passed by the client through
the libpq protocol, while autoprepare deals with parameter values
extracted from literals.
3. I do not want to generate a dummy name for an autoprepared query to handle
it as a normal prepared statement.
And I cannot use unnamed statements, because I want the lifetime of
autoprepared statements to be longer than one statement.
4. I have to use a slightly different memory context policy than named or
unnamed prepared statements.

Also, these three exec_* functions contain prolog/epilog code which is
needed because they serve separate libpq requests.
But autoprepared statements need to be executed in the context of a
single libpq message, so most of this code is redundant.

It could use a visit from the style police and a spell-checker, too.

I will definitely fix the style and misspellings - I have not yet submitted
this patch to the commitfest, and there is enough time before the next one.
My primary intention in publishing this patch is to receive feedback on the
proposed approach.
I have already received two very useful pieces of advice: limit the number
of cached statements and pay more attention to safety.
This is why I have reimplemented my original approach, which substituted
string literals with parameters without building a parse tree.

Right now I am mostly thinking about two problems (illustrated by the
sketch after this list):
1. Finding the best criteria for detecting which literals need to be
replaced with parameters and which do not. It is clear that replacing
"limit 10" with "limit $10"
makes little sense and can lead to a worse execution plan. So right
now I just ignore the sort, group by and limit parts, but maybe a more
flexible approach is possible.
2. Which type to choose for parameters. I can try to infer the type from
the context (current solution), or use the type of the literal.
The problem with the first approach is that the query compiler is not always
able to do it, and even if the type can be determined, it may be too generic
(for example, numeric instead of real,
or a range instead of an integer). The problem with the second approach is
the opposite: the type inferred from the literal can be too restrictive -
quite often integer literals are used to specify values of floating-point
constants. The best solution would be to first determine the parameter type
from the context and then refine it based on the literal type, but that
would require repeating query analysis.
Not sure if it is possible.
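
To illustrate both problems with a sketch (a made-up table and column types):

    -- Problem 1: which literals to generalize.
    SELECT * FROM t WHERE k = 123 LIMIT 10;
    -- is intended to be cached in the generalized form
    --   SELECT * FROM t WHERE k = $1 LIMIT 10;
    -- i.e. 123 becomes a parameter, while the LIMIT constant stays
    -- hard-coded, since generalizing it could produce a worse plan.

    -- Problem 2: which type to give a parameter.
    -- If column x is of type float8, then in
    SELECT * FROM t WHERE x = 1;
    -- the context suggests float8 for the parameter, while the literal
    -- itself suggests an integer type; neither source alone is always right.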

Thanks for your feedback.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#34Robert Haas
robertmhaas@gmail.com
In reply to: Konstantin Knizhnik (#33)
Re: Cached plans and statement generalization

On Tue, May 2, 2017 at 5:50 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

I don't see something with a bunch of hard-coded rules for particular type
OIDs having any chance of being acceptable.

Well, what I need is ...

Regarding this...

Definitely, copying code is a bad flaw. It would be much better and easier
just to call the three original functions instead of gathering their code
into the new function.
But I failed to do that because ...

...and also this:

I am sympathetic to the fact that this is a hard problem to solve.
I'm just telling you that the way you've got it is not acceptable and
nobody's going to commit it like that (or if they do, they will end up
having to revert it). If you want to have a technical discussion
about what might be a way to change the patch to be more acceptable,
cool, but I don't want to get into a long debate about whether what
you have is acceptable or not; I've already said what I think about
that and I believe that opinion will be widely shared. I am not
trying to beat you up here, just trying to be clear.

Thanks for your feedback.

Sure thing.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#35Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Robert Haas (#34)
2 attachment(s)
Re: Cached plans and statement generalization

On 02.05.2017 21:26, Robert Haas wrote:

I am sympathetic to the fact that this is a hard problem to solve.
I'm just telling you that the way you've got it is not acceptable and
nobody's going to commit it like that (or if they do, they will end up
having to revert it). If you want to have a technical discussion
about what might be a way to change the patch to be more acceptable,
cool, but I don't want to get into a long debate about whether what
you have is acceptable or not; I've already said what I think about
that and I believe that opinion will be widely shared. I am not
trying to beat you up here, just trying to be clear.

Based on Robert's feedback and Tom's proposal, I have implemented two new
versions of the autoprepare patch.

The first version is just a refactoring of my original implementation: I have
extracted the common code into the prepare_cached_plan and exec_prepared_plan
functions to avoid code duplication. I also rewrote the assignment of values
to parameters. Now parameter types are inferred from the types of the
literals, so there may be several
prepared plans which differ only in parameter types. Due to the problem with
type coercion for parameters, I have to catch errors in
parse_analyze_varparams.
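
For illustration (a made-up table, not from the patch's tests), literal-based
type inference means that these two otherwise identical queries produce two
separate cache entries, because the inferred parameter types differ:

    SELECT * FROM t WHERE x > 1;    -- generalized with an integer parameter
    SELECT * FROM t WHERE x > 1.5;  -- generalized with a floating-point parameter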

Robert, can you please explain why using TRY/CATCH is not safe here:

This is definitely not a safe way of using TRY/CATCH.

The second version is based on Tom's suggestion:

Personally I'd think about
replacing the entire literal-with-cast construct with a Param having
already-known type.

So here I first patch the raw parse tree, replacing A_Const nodes with
ParamRef nodes. Such a tree is needed to perform the cache lookup.
Then I restore the original raw parse tree and call pg_analyze_and_rewrite.
Then I mutate the analyzed tree, replacing Const with Param nodes.
At this point type coercion has already been performed and I know the
precise types which should be used for the parameters.
It is a more sophisticated approach, and I cannot extract the common code
into prepare_cached_plan,
because preparing the plan is currently a mix of steps done in
exec_simple_query and exec_parse_message.
But there is no need to catch analyze errors.

Finally, the performance of both approaches is the same: with pgbench it is
180k TPS on read-only queries, compared with 80k TPS for non-prepared
queries.
In both cases 7 out of 177 regression tests do not pass (mostly because the
session environment is changed between subsequent query executions).
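
(For reference, the pgbench select-only transaction is a single statement of
the shape below, against the standard pgbench schema; pgbench sends it with
the literal value already substituted, so it is exactly the kind of query
autoprepare can generalize. The concrete aid value is just an example.)

    SELECT abalance FROM pgbench_accounts WHERE aid = 12345;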

I am going to continue work on this patch and will be glad to receive any
feedback and suggestions for its improvement.
In most cases, applications do not access Postgres directly, but go through
some connection pooling layer, and so they are not able to use prepared
statements.
Yet on simple OLTP workloads Postgres spends more time on building the query
plan than on execution itself, and it is possible to speed up Postgres about
two times on such a workload!
Another alternative is a true shared plan cache. Maybe it is an even more
promising approach, but it is definitely much more invasive and harder to
implement.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-1.patch (text/x-patch)
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 6e52eb7..f2eb0f5 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3696,6 +3696,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to the pointer to the current node and can update this field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, doing in-place updates instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If any other node is visited, iteration stops immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 3055b48..0f7aaac 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -73,6 +75,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -188,7 +191,16 @@ static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
-
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
+static CachedPlanSource *prepare_cached_plan(const char *query_string,/* string to execute */
+											 const char *stmt_name,	 /* name for prepared stmt */
+											 Oid *paramTypes,		 /* parameter types */
+											 int numParams,           /* number of parameters */								
+											 List *parsetree_list,    /* raw parse tree */
+											 MemoryContext unnamed_stmt_context, /* memory context for unnamed statements */
+											 bool noerror, 
+											 bool* requires_snapshot);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -958,6 +970,14 @@ exec_simple_query(const char *query_string)
 	isTopLevel = (list_length(parsetree_list) == 1);
 
 	/*
+	 * Try to find cached plan
+	 */
+	if (isTopLevel && autoprepare_threshold != 0 && exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1202,12 +1222,9 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	MemoryContext unnamed_stmt_context = NULL;
 	MemoryContext oldcontext;
 	List	   *parsetree_list;
-	RawStmt    *raw_parse_tree;
-	const char *commandTag;
-	List	   *querytree_list;
-	CachedPlanSource *psrc;
 	bool		is_named;
 	bool		save_log_statement_stats = log_statement_stats;
+	bool        requires_snapshot;
 	char		msec_str[32];
 
 	/*
@@ -1281,144 +1298,14 @@ exec_parse_message(const char *query_string,	/* string to execute */
 				(errcode(ERRCODE_SYNTAX_ERROR),
 		errmsg("cannot insert multiple commands into a prepared statement")));
 
-	if (parsetree_list != NIL)
-	{
-		Query	   *query;
-		bool		snapshot_set = false;
-		int			i;
-
-		raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
-
-		/*
-		 * Get the command name for possible use in status display.
-		 */
-		commandTag = CreateCommandTag(raw_parse_tree->stmt);
-
-		/*
-		 * If we are in an aborted transaction, reject all commands except
-		 * COMMIT/ROLLBACK.  It is important that this test occur before we
-		 * try to do parse analysis, rewrite, or planning, since all those
-		 * phases try to do database accesses, which may fail in abort state.
-		 * (It might be safe to allow some additional utility commands in this
-		 * state, but not many...)
-		 */
-		if (IsAbortedTransactionBlockState() &&
-			!IsTransactionExitStmt(raw_parse_tree->stmt))
-			ereport(ERROR,
-					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
-					 errmsg("current transaction is aborted, "
-						  "commands ignored until end of transaction block"),
-					 errdetail_abort()));
-
-		/*
-		 * Create the CachedPlanSource before we do parse analysis, since it
-		 * needs to see the unmodified raw parse tree.
-		 */
-		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
-
-		/*
-		 * Set up a snapshot if parse analysis will need one.
-		 */
-		if (analyze_requires_snapshot(raw_parse_tree))
-		{
-			PushActiveSnapshot(GetTransactionSnapshot());
-			snapshot_set = true;
-		}
-
-		/*
-		 * Analyze and rewrite the query.  Note that the originally specified
-		 * parameter set is not required to be complete, so we have to use
-		 * parse_analyze_varparams().
-		 */
-		if (log_parser_stats)
-			ResetUsage();
-
-		query = parse_analyze_varparams(raw_parse_tree,
-										query_string,
-										&paramTypes,
-										&numParams);
-
-		/*
-		 * Check all parameter types got determined.
-		 */
-		for (i = 0; i < numParams; i++)
-		{
-			Oid			ptype = paramTypes[i];
-
-			if (ptype == InvalidOid || ptype == UNKNOWNOID)
-				ereport(ERROR,
-						(errcode(ERRCODE_INDETERMINATE_DATATYPE),
-					 errmsg("could not determine data type of parameter $%d",
-							i + 1)));
-		}
-
-		if (log_parser_stats)
-			ShowUsage("PARSE ANALYSIS STATISTICS");
-
-		querytree_list = pg_rewrite_query(query);
-
-		/* Done with the snapshot used for parsing */
-		if (snapshot_set)
-			PopActiveSnapshot();
-	}
-	else
-	{
-		/* Empty input string.  This is legal. */
-		raw_parse_tree = NULL;
-		commandTag = NULL;
-		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
-		querytree_list = NIL;
-	}
-
-	/*
-	 * CachedPlanSource must be a direct child of MessageContext before we
-	 * reparent unnamed_stmt_context under it, else we have a disconnected
-	 * circular subgraph.  Klugy, but less so than flipping contexts even more
-	 * above.
+	/*
+	 * Create cached plan, analyze parameters and save cached plan
 	 */
-	if (unnamed_stmt_context)
-		MemoryContextSetParent(psrc->context, MessageContext);
-
-	/* Finish filling in the CachedPlanSource */
-	CompleteCachedPlan(psrc,
-					   querytree_list,
-					   unnamed_stmt_context,
-					   paramTypes,
-					   numParams,
-					   NULL,
-					   NULL,
-					   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
-					   true);	/* fixed result */
-
-	/* If we got a cancel signal during analysis, quit */
-	CHECK_FOR_INTERRUPTS();
-
-	if (is_named)
-	{
-		/*
-		 * Store the query as a prepared statement.
-		 */
-		StorePreparedStatement(stmt_name, psrc, false);
-	}
-	else
-	{
-		/*
-		 * We just save the CachedPlanSource into unnamed_stmt_psrc.
-		 */
-		SaveCachedPlan(psrc);
-		unnamed_stmt_psrc = psrc;
-	}
+	prepare_cached_plan(query_string, stmt_name, paramTypes, numParams,
+						parsetree_list, unnamed_stmt_context, false,
+						&requires_snapshot);
 
 	MemoryContextSwitchTo(oldcontext);
 
 	/*
-	 * We do NOT close the open transaction command here; that only happens
-	 * when the client sends Sync.  Instead, do CommandCounterIncrement just
-	 * in case something happened during parse/plan.
-	 */
-	CommandCounterIncrement();
-
-	/*
 	 * Send ParseComplete.
 	 */
 	if (whereToSendOutput == DestRemote)
@@ -1841,9 +1728,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows,
+				   CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1855,17 +1761,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1928,7 +1823,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4507,3 +4402,815 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 				  port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Create the cached plan, analyze its parameters, and save it
+ */
+static CachedPlanSource *prepare_cached_plan(const char *query_string,/* string to execute */
+											 const char *stmt_name,	  /* name for prepared stmt */
+											 Oid *paramTypes,		  /* parameter types */
+											 int numParams,           /* number of parameters */								
+											 List *parsetree_list,    /* raw parse tree */
+											 MemoryContext unnamed_stmt_context, /* memory context for unnamed statements */
+											 bool noerror,            /* do not report error in case of unresolved parameters */
+											 bool* requires_snapshot) /* parse analysis requires snapshot */
+{
+	CachedPlanSource *psrc;
+	RawStmt    *raw_parse_tree;
+	const char *commandTag;
+	List	   *querytree_list;
+
+	*requires_snapshot = false;
+
+	if (parsetree_list != NIL)
+	{
+		Query	   *query;
+		bool		snapshot_set = false;
+		int			i;
+
+		raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+		
+		/*
+		 * Get the command name for possible use in status display.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ROLLBACK.  It is important that this test occur before we
+		 * try to do parse analysis, rewrite, or planning, since all those
+		 * phases try to do database accesses, which may fail in abort state.
+		 * (It might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Set up a snapshot if parse analysis will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			*requires_snapshot = snapshot_set = true;
+		}
+
+		/*
+		 * Analyze and rewrite the query.  Note that the originally specified
+		 * parameter set is not required to be complete, so we have to use
+		 * parse_analyze_varparams().
+		 */
+		if (log_parser_stats)
+			ResetUsage();
+
+		query = parse_analyze_varparams(raw_parse_tree,
+										query_string,
+										&paramTypes,
+										&numParams);
+
+		/*
+		 * Check all parameter types got determined.
+		 */
+		for (i = 0; i < numParams; i++)
+		{
+			Oid			ptype = paramTypes[i];
+
+			if (ptype == InvalidOid || ptype == UNKNOWNOID)
+			{
+				if (noerror) 
+				{
+					if (snapshot_set)
+						PopActiveSnapshot();
+					return NULL;
+				}
+				else
+				{
+					ereport(ERROR,
+							(errcode(ERRCODE_INDETERMINATE_DATATYPE),
+							 errmsg("could not determine data type of parameter $%d",
+									i + 1)));
+				}
+			}
+		}
+
+		if (log_parser_stats)
+			ShowUsage("PARSE ANALYSIS STATISTICS");
+
+		querytree_list = pg_rewrite_query(query);
+
+		/* Done with the snapshot used for parsing */
+		if (snapshot_set)
+			PopActiveSnapshot();
+	}
+	else
+	{
+		/* Empty input string.  This is legal. */
+		raw_parse_tree = NULL;
+		commandTag = NULL;
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+		querytree_list = NIL;
+	}
+
+	/*
+	 * CachedPlanSource must be a direct child of MessageContext before we
+	 * reparent unnamed_stmt_context under it, else we have a disconnected
+	 * circular subgraph.  Klugy, but less so than flipping contexts even more
+	 * above.
+	 */
+	if (unnamed_stmt_context)
+		MemoryContextSetParent(psrc->context, MessageContext);
+
+	/* Finish filling in the CachedPlanSource */
+	CompleteCachedPlan(psrc,
+					   querytree_list,
+					   unnamed_stmt_context,
+					   paramTypes,
+					   numParams,
+					   NULL,
+					   NULL,
+					   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+					   true);	/* fixed result */
+
+	/* If we got a cancel signal during analysis, quit */
+	CHECK_FOR_INTERRUPTS();
+
+	if (stmt_name[0] != '\0')
+	{
+		/*
+		 * Store the query as a prepared statement.
+		 */
+		StorePreparedStatement(stmt_name, psrc, false);
+	}
+	else
+	{
+		/*
+		 * We just save the CachedPlanSource into unnamed_stmt_psrc.
+		 */
+		SaveCachedPlan(psrc);
+		unnamed_stmt_psrc = psrc;
+	}
+
+	/*
+	 * We do NOT close the open transaction command here; that only happens
+	 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+	 * in case something happened during parse/plan.
+	 */
+	CommandCounterIncrement();
+
+	return psrc;
+}
+
+/*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached
+ * plans, and the exec_cached_query function, which combines
+ * exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding 
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 param;	  /* Constructed parameter reference */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 type;	  /* Parameter type */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct 
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+	
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct a singly linked list of parameters */
+} GeneralizerCtx;
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals
+ * with parameters inside such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID; 
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+		
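+/*
+ * Convert the literal value to a Datum of the given parameter type.
+ */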
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer) 
+	{	
+		/* 
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	} 
+	else 
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+		
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+	
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+query_plan_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here;
+	 * precise comparison of trees is done using the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without
+	 * ParamRef nodes. That is acceptable, because the hash calculation does
+	 * not take the types and values of constant nodes into account, so the
+	 * same generalized queries still get the same hash value. There are about
+	 * 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is
+			 * likely to be the same for all queries and folded by the planner)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (query_plan_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (query_plan_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Substitute literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;		  
+			if (literal->val.type != T_Null) 
+			{
+				/* 
+				 * Do not substitute null literals with parameters
+				 */				
+				ParamBinding* cp = palloc(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->param = param;
+				cp->literal = literal;
+				cp->type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and in the WHERE,
+		   * VALUES and WITH clauses, skipping the FROM list and other
+		   * clauses, which are unlikely to contain parameterizable values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (query_plan_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (query_plan_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (query_plan_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (query_plan_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (query_plan_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (query_plan_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for
+		 * all unrecognized nodes; for example, transaction control
+		 * statements are currently not covered by it and so will not be
+		 * autoprepared. My experiments show that not preparing BEGIN/COMMIT
+		 * statements actually has a positive effect.
+		 */
+		return raw_expression_tree_mutator(node, query_plan_generalizer, context);
+	}
+	return false;
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with
+ * the original A_Const literals. Undoing the changes this way seems more
+ * efficient than copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+	autoprepare_misses += 1;
+}
+
+/*
+ * Callback for raw_expression_tree_walker that frees a parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of the converted literal in the query.
+ * Used for precise error reporting (error cursor position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+/*
+ * Try to generalize the query, find a cached plan for it, and execute it
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	static HTAB*	  plan_cache; /* hash table for plan cache */
+	static dlist_head lru;		  /* LRU doubly linked list of cached queries */
+	static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (query_plan_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract array of parameters types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);	
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next) 
+	{ 
+		param_types[paramno] = binding->type;
+	}
+	
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL) 
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+								 &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid) 
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		return false;
+	}
+	if (entry->plan == NULL)
+	{
+		bool requires_snapshot;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Create the cached plan, analyze its parameters, and save it
+		 */
+		PG_TRY();
+		{
+			psrc = prepare_cached_plan(query_string, "", param_types, n_params, parsetree_list, NULL, true, &requires_snapshot);
+		}
+		PG_CATCH();
+		{
+			/*
+			 * In case of analyze errors revert back to original query processing
+			 * and disable autoprepare for this query to avoid such problems in future.
+			 */			
+			FlushErrorState();
+			if (requires_snapshot) 
+			{
+				PopActiveSnapshot();
+			}
+			psrc = NULL;
+		}
+		PG_END_TRY();
+		MemoryContextSwitchTo(old_context);
+
+		if (psrc == NULL) 
+		{
+			/* Failed to resolve parameter type */
+			undo_query_plan_changes(binding_list);
+			entry->disable_autoprepare = true;
+			return false;
+		}
+	
+		unnamed_stmt_psrc = NULL; /* prevent destruction of prepared plan by drop_unnamed_stmt */
+		entry->plan = psrc;
+		
+		/* 
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the error position precisely
+		 * in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	} 
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set) 
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/* 
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
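
To illustrate the generalization rules implemented above, here is a sketch of
which literals are replaced with parameters (assuming the patch is applied;
table and column names are made up):

    -- repeated executions with different literals share one generalized plan:
    SELECT * FROM users WHERE id = 541 AND status = 'active';
    -- is internally generalized to:
    SELECT * FROM users WHERE id = $1 AND status = $2;

    -- literals that are deliberately NOT parameterized:
    INSERT INTO users VALUES (1, NULL);    -- NULL literals are kept as-is
    SELECT * FROM users WHERE id = 1 + 2;  -- constant expressions are kept
    SELECT '2017-01-01'::date;             -- literals under casts are kept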
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 8b5f064..5ab83f0 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -476,6 +476,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1960,6 +1964,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("Zero disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("Zero means an unlimited number of autoprepared queries. Too many prepared queries can exhaust backend memory and slow down execution because of increased lookup time.")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index b6c9b48..1388822 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,8 +73,11 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 												   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+												   void *context);
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 											  void *context);
 
+
 #endif   /* NODEFUNCS_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 7dd3780..9cd0917 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -251,6 +251,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
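
Combining the two settings also shows the per-backend LRU behaviour of the
plan cache; a sketch with hypothetical tables t1 and t2:

    SET autoprepare_threshold = 1;
    SET autoprepare_limit = 1;     -- cache at most one generalized plan
    SELECT * FROM t1 WHERE a = 1;  -- first execution: normal path
    SELECT * FROM t1 WHERE a = 2;  -- second execution: plan is prepared and cached
    SELECT * FROM t2 WHERE b = 3;  -- new cache entry evicts the t1 plan (LRU)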
Attachment: autoprepare-2.patch (text/x-patch)
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 6e52eb7..5538c7c 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3696,6 +3696,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree
+ *
+ * This function implements a slightly different approach to tree updates
+ * than expression_tree_mutator(): the callback is given a pointer to the
+ * pointer to the current node and can update that field instead of
+ * returning a reference to a new node. This makes it possible to remember
+ * changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never
+ * implicitly copies tree nodes, updating them in place instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, which is
+ * enough for generalizing the queries autoprepare targets. If some other
+ * node is visited, iteration stops immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator callback has already visited the current node, and so we
+	 * need only recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink    *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr    *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+		    return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
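
Only statement types covered by this mutator can be generalized: any node
that falls into the default branch makes the mutator return true, and the
query then goes through the regular execution path (the postgres.c part of
the patch additionally rejects SELECT ... INTO). A sketch:

    UPDATE t SET x = 10 WHERE id = 2;  -- DML: eligible for autoprepare
    SELECT 1 INTO tmp;                 -- utility statement: never autoprepared
    VACUUM t;                          -- unrecognized node: never autoprepared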
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 3055b48..a36ac84 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -73,6 +75,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -188,7 +191,8 @@ static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
-
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -958,6 +962,14 @@ exec_simple_query(const char *query_string)
 	isTopLevel = (list_length(parsetree_list) == 1);
 
 	/*
+	 * Try to find a cached plan
+	 */
+	if (isTopLevel && autoprepare_threshold != 0 && exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1841,9 +1853,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows,
+				   CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1855,17 +1886,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1928,7 +1948,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4507,3 +4527,768 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 				  port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached
+ * plans, and the exec_cached_query function, which combines
+ * exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct a singly linked list of parameters */
+} GeneralizerCtx;
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals
+ * with parameters inside such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
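+/*
+ * Convert the literal value to a Datum of the given parameter type.
+ */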
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here;
+	 * precise comparison of trees is done using the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without
+	 * ParamRef nodes. That is acceptable, because the hash calculation does
+	 * not take the types and values of constant nodes into account, so the
+	 * same generalized queries still get the same hash value. There are about
+	 * 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is
+			 * likely to be the same for all queries and folded by the planner)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Substitute literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and in the WHERE,
+		   * VALUES and WITH clauses, skipping the FROM list and other
+		   * clauses, which are unlikely to contain parameterizable values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* utility statements cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for
+		 * all unrecognized nodes; for example, transaction control
+		 * statements are currently not covered by it and so will not be
+		 * autoprepared. My experiments show that not preparing BEGIN/COMMIT
+		 * statements actually has a positive effect.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
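+/*
+ * Mutator for the analyzed query tree: replace the Const nodes produced from
+ * the literals that raw_parse_tree_generalizer substituted (matched by their
+ * source location) with PARAM_EXTERN Param nodes.
+ */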
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				return (Node*)binding->param;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with
+ * the original A_Const literals. Undoing the changes this way seems more
+ * efficient than copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Callback for raw_expression_tree_walker dropping parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of the converted literal in the query.
+ * Used for precise error reporting (error cursor position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+
+/*
+ * Try to generalize the query, find a cached plan for it, and execute it
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	static HTAB*	  plan_cache; /* hash table for plan cache */
+	static dlist_head lru;		  /* LRU doubly linked list of cached queries */
+	static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract array of parameters types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+								 &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+		Query      *query;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0);
+		/*
+		 * Replace Const with Param nodes
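+		 * (done after parse analysis, so that literal types have already been resolved)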
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
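+		/* Rebuild the parameter type array from the types resolved during analysis */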
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to precisely report the error position if a conversion error occurs while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
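+			/* Remember the literal's position for prepare_error_callback */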
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 8b5f064..5ab83f0 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -476,6 +476,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1960,6 +1964,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("Zero disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("Zero means an unlimited number of autoprepared queries. Too many prepared queries can exhaust backend memory and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index b6c9b48..1388822 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,8 +73,11 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 												   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+												   void *context);
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 											  void *context);
 
+
 #endif   /* NODEFUNCS_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 7dd3780..9cd0917 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -251,6 +251,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
#36Bruce Momjian
bruce@momjian.us
In reply to: Konstantin Knizhnik (#35)
Re: Cached plans and statement generalization

On Wed, May 10, 2017 at 07:11:07PM +0300, Konstantin Knizhnik wrote:

I am going to continue work on this patch, and I will be glad to receive any
feedback and suggestions for its improvement.
In most cases, applications are not accessing Postgres directly, but through
some connection pooling layer, and so they are not able to use prepared
statements.
But on simple OLTP workloads Postgres spends more time building the query
plan than on execution itself. And it is possible to speed up Postgres about
two times on such a workload!
Another alternative is a true shared plan cache. Maybe it is an even more
promising approach, but definitely much more invasive and harder to
implement.

Can we back up and get an overview of what you are doing and how you are
doing it? Our TODO list suggests this order for successful patches:

Desirability -> Design -> Implement -> Test -> Review -> Commit

You kind of started at the Implementation/patch level, which makes it
hard to evaluate.

I think everyone agrees on the Desirability of the feature, but the
Design is the tricky part. I think the design questions are:

* What information is stored about cached plans?
* How are the cached plans invalidated?
* How is a query matched against a cached plan?

Looking at the options, ideally the plan would be cached at the same
query stage as the stage where the incoming query is checked against the
cache. However, caching and checking at the same level offers no
benefit, so they are going to be different. For example, caching a
parse tree at the time it is created, then checking at the same point if
the incoming query is the same doesn't help you because you already had
to create the parse tree to get to that point.

A more concrete example is prepared statements. They are stored at the
end of planning and matched in the parser. However, you can easily do
that since the incoming query specifies the name of the prepared query,
so there is no trick to matching.

The desire is to cache as late as possible so you cache more work and
you have more detail about the referenced objects, which helps with
cache invalidation. However, you also want to do cache matching as
early as possible to improve performance.

So, let's look at some options. One interesting idea from Doug Doole
was to do it between the tokenizer and parser. I think they are glued
together so you would need a way to run the tokenizer separately and
compare that to the tokens you stored for the cached plan. The larger
issue is that prepared plans already are checked after parsing, and we
know they are a win, so matching any earlier than that just seems like
overkill and likely to lead to lots of problems.

So, you could do it after parsing but before parse-analysis, which is
kind of what prepared queries do. One tricky problem is that we don't
bind the query string tokens to database objects until after parse
analysis.

Doing matching before parse-analysis is going to be tricky, which is why
there are so many comments about the approach. Changing search_path can
certainly affect it, but creating objects in earlier-mentioned schemas
can also change how an object reference in a query is resolved. Even
obscure things like the creation of a new operator that has higher
precedence in the query could change the plan, though am not sure if
our prepared query system even handles that properly.

Anyway, that is my feedback. I would like to get an overview of what
you are trying to do and the costs/benefits of each option so we can
best guide you.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#37Douglas Doole
dougdoole@gmail.com
In reply to: Bruce Momjian (#36)
Re: Cached plans and statement generalization

One interesting idea from Doug Doole was to do it between the tokenizer
and parser. I think they are glued together so you would need a way to run
the tokenizer separately and compare that to the tokens you stored for the
cached plan.

When I did this, we had the same problem that the tokenizer and parser were
tightly coupled. Fortunately, I was able to do as you suggest and run the
tokenizer separately to do my analysis.

So my model was to do statement generalization before entering the compiler
at all. I would tokenize the statement to find the literals and generate a
new statement string with placeholders. The new string would then be passed
to the compiler which would then tokenize and parse the reworked statement.

This means we incurred the cost of tokenizing twice, but the tokenizer was
lightweight enough that it wasn't a problem. In exchange I was able to do
statement generalization without touching the compiler - the compiler saw
the generalized statement text as any other statement and handled it in the
exact same way. (There was just a bit of new code around variable binding.)
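
To make this concrete, here is a minimal, self-contained sketch of such a
pre-parser generalization pass (my illustration, not Doug's actual code; it
handles only simple single-quoted strings and bare integers, whereas a real
implementation would have to follow the server's lexer rules for escapes,
dollar quoting, comments and so on):

#include <ctype.h>
#include <stdio.h>

/* Rewrite "sql" into "out", replacing literals with $1, $2, ... */
static void
generalize_statement(const char *sql, char *out, size_t outsz)
{
	int			nparams = 0;
	size_t		o = 0;
	const char *p = sql;

	while (*p && o + 16 < outsz)
	{
		if (*p == '\'')
		{
			/* skip a (simplistically lexed) string literal */
			for (p++; *p && *p != '\''; p++)
				;
			if (*p == '\'')
				p++;
			o += snprintf(out + o, outsz - o, "$%d", ++nparams);
		}
		else if (isdigit((unsigned char) *p) &&
				 (o == 0 || (!isalnum((unsigned char) out[o - 1]) &&
							 out[o - 1] != '_')))
		{
			/* skip an integer literal that is not glued to an identifier */
			while (isdigit((unsigned char) *p))
				p++;
			o += snprintf(out + o, outsz - o, "$%d", ++nparams);
		}
		else
			out[o++] = *p++;
	}
	out[o] = '\0';
}

int
main(void)
{
	char		buf[256];

	generalize_statement("SELECT * FROM t WHERE id = 42 AND name = 'joe'",
						 buf, sizeof buf);
	puts(buf);				/* SELECT * FROM t WHERE id = $1 AND name = $2 */
	return 0;
}

The extracted values would be remembered alongside the rewritten text and
bound as parameters, so the compiler proper never has to know any of this
happened.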

#38Bruce Momjian
bruce@momjian.us
In reply to: Douglas Doole (#37)
Re: Cached plans and statement generalization

On Thu, May 11, 2017 at 05:39:58PM +0000, Douglas Doole wrote:

One interesting idea from Doug Doole was to do it between the tokenizer and
parser.  I think they are glued together so you would need a way to run the
tokenizer separately and compare that to the tokens you stored for the
cached plan.

When I did this, we had the same problem that the tokenizer and parser were
tightly coupled. Fortunately, I was able to do as you suggest and run the
tokenizer separately to do my analysis.

So my model was to do statement generalization before entering the compiler at
all. I would tokenize the statement to find the literals and generate a new
statement string with placeholders. The new string would then be passed to the
compiler which would then tokenize and parse the reworked statement.

This means we incurred the cost of tokenizing twice, but the tokenizer was
lightweight enough that it wasn't a problem. In exchange I was able to do
statement generalization without touching the compiler - the compiler saw the
generalized statement text as any other statement and handled it in the exact
same way. (There was just a bit of new code around variable binding.)

Good point. I think we need to do some measurements to see if the
parser-only stage is actually significant. I have a hunch that
commercial databases have much heavier parsers than we do.

This split would also not work if the scanner feeds changes back into
the parser. I know C does that for typedefs but I don't think we do.

Ideally I would like to see percentage-of-execution numbers for typical
queries for scan, parse, parse-analysis, plan, and execute to see where
the wins are.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#39Tom Lane
tgl@sss.pgh.pa.us
In reply to: Bruce Momjian (#38)
Re: Cached plans and statement generalization

Bruce Momjian <bruce@momjian.us> writes:

Good point. I think we need to do some measurements to see if the
parser-only stage is actually significant. I have a hunch that
commercial databases have much heavier parsers than we do.

FWIW, gram.y does show up as significant in many of the profiles I take.
I speculate that this is not so much that it eats many CPU cycles, as that
the constant tables are so large as to incur lots of cache misses. scan.l
is not quite as big a deal for some reason, even though it's also large.

regards, tom lane


#40Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#39)
Re: Cached plans and statement generalization

On May 11, 2017 11:31:02 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Bruce Momjian <bruce@momjian.us> writes:

Good point. I think we need to do some measurements to see if the
parser-only stage is actually significant. I have a hunch that
commercial databases have much heavier parsers than we do.

FWIW, gram.y does show up as significant in many of the profiles I
take.
I speculate that this is not so much that it eats many CPU cycles, as
that
the constant tables are so large as to incur lots of cache misses.
scan.l
is not quite as big a deal for some reason, even though it's also
large.

And that there's a lot of unpredictable branches, leading to a lot if pipeline stalls.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


#41Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Bruce Momjian (#36)
Re: Cached plans and statement generalization

On 05/11/2017 06:12 PM, Bruce Momjian wrote:

On Wed, May 10, 2017 at 07:11:07PM +0300, Konstantin Knizhnik wrote:

I am going to continue work on this patch, and I will be glad to receive any
feedback and suggestions for its improvement.
In most cases, applications are not accessing Postgres directly, but through
some connection pooling layer, and so they are not able to use prepared
statements.
But on simple OLTP workloads Postgres spends more time building the query
plan than on execution itself. And it is possible to speed up Postgres about
two times on such a workload!
Another alternative is a true shared plan cache. Maybe it is an even more
promising approach, but definitely much more invasive and harder to
implement.

Can we back up and get an overview of what you are doing and how you are
doing it? Our TODO list suggests this order for successful patches:

Desirability -> Design -> Implement -> Test -> Review -> Commit

You kind of started at the Implementation/patch level, which makes it
hard to evaluate.

I think everyone agrees on the Desirability of the feature, but the
Design is the tricky part. I think the design questions are:

* What information is stored about cached plans?
* How are the cached plans invalidated?
* How is a query matched against a cached plan?

Looking at the options, ideally the plan would be cached at the same
query stage as the stage where the incoming query is checked against the
cache. However, caching and checking at the same level offers no
benefit, so they are going to be different. For example, caching a
parse tree at the time it is created, then checking at the same point if
the incoming query is the same doesn't help you because you already had
to create the parse tree to get to that point.

A more concrete example is prepared statements. They are stored at the
end of planning and matched in the parser. However, you can easily do
that since the incoming query specifies the name of the prepared query,
so there is no trick to matching.

The desire is to cache as late as possible so you cache more work and
you have more detail about the referenced objects, which helps with
cache invalidation. However, you also want to do cache matching as
early as possible to improve performance.

So, let's look at some options. One interesting idea from Doug Doole
was to do it between the tokenizer and parser. I think they are glued
together so you would need a way to run the tokenizer separately and
compare that to the tokens you stored for the cached plan. The larger
issue is that prepared plans already are checked after parsing, and we
know they are a win, so matching any earlier than that just seems like
overkill and likely to lead to lots of problems.

So, you could do it after parsing but before parse-analysis, which is
kind of what prepared queries do. One tricky problem is that we don't
bind the query string tokens to database objects until after parse
analysis.

Doing matching before parse-analysis is going to be tricky, which is why
there are so many comments about the approach. Changing search_path can
certainly affect it, but creating objects in earlier-mentioned schemas
can also change how an object reference in a query is resolved. Even
obscure things like the creation of a new operator that has higher
precedence in the query could change the plan, though am not sure if
our prepared query system even handles that properly.

Anyway, that is my feedback. I would like to get an overview of what
you are trying to do and the costs/benefits of each option so we can
best guide you.

Sorry for the lack of an overview.
I started with a small prototype just to investigate whether such an optimization makes sense or not.
When I got a more than twofold performance advantage on standard pgbench, I came to the conclusion that this
optimization can be really useful, and I am now trying to find the best way to implement it.

I started with the simplest approach, where string literals are replaced with parameters. It is done before parsing,
and can be done very fast: we just need to locate the data in quotes.
But this approach is neither safe nor universal: you will not be able to speed up most existing queries without rewriting them.

This is why I provided a second implementation, which replaces literals with parameters after raw parsing.
Certainly it is slower than the first approach, but it still provides a significant performance advantage: more than two times on pgbench.
Then I ran the regression tests and found several situations where type analysis is not performed correctly when literals are replaced with parameters.

So my third attempt is to replace constant nodes with parameters in the already-analyzed tree.

Now answering your questions:

* What information is stored about cached plans?

The key used to locate a cached plan is the raw parse tree; the value is the saved CachedPlanSource.

* How are the cached plans invalidated?

In the same way as plans for explicitly prepared statements.

* How is a query matched against a cached plan?

A hash function is calculated over the raw parse tree, and the equal() function is used to compare raw plans whose literal nodes have been replaced with parameters.
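
In dynahash terms, the lookup callbacks amount to roughly the following
(my sketch for illustration, not the patch's exact code; field names
follow the patch's plan_cache_entry):

static uint32
plan_cache_hash_fn(const void *key, Size keysize)
{
	/* the hash was precomputed while generalizing the raw parse tree */
	return ((const plan_cache_entry *) key)->hash;
}

static int
plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
{
	const plan_cache_entry *e1 = (const plan_cache_entry *) key1;
	const plan_cache_entry *e2 = (const plan_cache_entry *) key2;

	if (e1->n_params != e2->n_params ||
		memcmp(e1->param_types, e2->param_types,
			   e1->n_params * sizeof(Oid)) != 0)
		return 1;				/* different parameter signatures */

	/* generalized raw parse trees must match node by node */
	return equal(e1->parse_tree, e2->parse_tree) ? 0 : 1;
}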

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#42Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Tom Lane (#39)
Re: Cached plans and statement generalization

On 05/11/2017 09:31 PM, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Good point. I think we need to do some measurements to see if the
parser-only stage is actually significant. I have a hunch that
commercial databases have much heavier parsers than we do.

FWIW, gram.y does show up as significant in many of the profiles I take.
I speculate that this is not so much that it eats many CPU cycles, as that
the constant tables are so large as to incur lots of cache misses. scan.l
is not quite as big a deal for some reason, even though it's also large.

regards, tom lane

Yes, my results show that pg_parse_query does not add that much overhead:
206k TPS for my first variant, with string literal substitution and the modified query text used as the hash key, vs.
181k TPS for the version that patches the raw parse tree constructed by pg_parse_query.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#43Andres Freund
andres@anarazel.de
In reply to: Konstantin Knizhnik (#42)
Re: Cached plans and statement generalization

On 2017-05-11 22:48:26 +0300, Konstantin Knizhnik wrote:

On 05/11/2017 09:31 PM, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Good point. I think we need to do some measurements to see if the
parser-only stage is actually significant. I have a hunch that
commercial databases have much heavier parsers than we do.

FWIW, gram.y does show up as significant in many of the profiles I take.
I speculate that this is not so much that it eats many CPU cycles, as that
the constant tables are so large as to incur lots of cache misses. scan.l
is not quite as big a deal for some reason, even though it's also large.

regards, tom lane

Yes, my results show that pg_parse_query does not add that much overhead:
206k TPS for my first variant, with string literal substitution and the modified query text used as the hash key, vs.
181k TPS for the version that patches the raw parse tree constructed by pg_parse_query.

Those numbers and your statement seem to contradict each other?

- Andres


#44Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Andres Freund (#43)
1 attachment(s)
Re: Cached plans and statement generalization

On 05/11/2017 10:52 PM, Andres Freund wrote:

On 2017-05-11 22:48:26 +0300, Konstantin Knizhnik wrote:

On 05/11/2017 09:31 PM, Tom Lane wrote:

Bruce Momjian <bruce@momjian.us> writes:

Good point. I think we need to do some measurements to see if the
parser-only stage is actually significant. I have a hunch that
commercial databases have much heavier parsers than we do.

FWIW, gram.y does show up as significant in many of the profiles I take.
I speculate that this is not so much that it eats many CPU cycles, as that
the constant tables are so large as to incur lots of cache misses. scan.l
is not quite as big a deal for some reason, even though it's also large.

regards, tom lane

Yes, my results show that pg_parse_query does not add that much overhead:
206k TPS for my first variant, with string literal substitution and the modified query text used as the hash key, vs.
181k TPS for the version that patches the raw parse tree constructed by pg_parse_query.

Those numbers and your statement seem to contradict each other?

Oops, my parse error :( I misread Tom's statement.
Actually, I was also afraid that the price of parsing would be significant, and this is why my first attempt was to avoid parsing.
But then I found out that most of the time is spent in analysis and planning (see the attached profile):

pg_parse_query: 4.23%
pg_analyze_and_rewrite 8.45%
pg_plan_queries: 15.49%

- Andres

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

pgbench.svg (image/svg+xml)
#45Bruce Momjian
bruce@momjian.us
In reply to: Konstantin Knizhnik (#41)
Re: Cached plans and statement generalization

On Thu, May 11, 2017 at 10:41:45PM +0300, Konstantin Knizhnik wrote:

This is why I provided a second implementation, which replaces
literals with parameters after raw parsing. Certainly it is slower
than the first approach, but it still provides a significant performance
advantage: more than two times on pgbench. Then I ran the
regression tests and found several situations where type analysis is
not performed correctly when literals are replaced with parameters.

So the issue is that per-command output from the parser, SelectStmt,
only has strings for identifiers, e.g. table and column names, so you
can't be sure it is the same as the cached entry you matched. I suppose
if you cleared the cache every time someone created an object or changed
search_path, it might work.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#46Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Bruce Momjian (#45)
Re: Cached plans and statement generalization

On 12.05.2017 03:58, Bruce Momjian wrote:

On Thu, May 11, 2017 at 10:41:45PM +0300, Konstantin Knizhnik wrote:

This is why I provided a second implementation, which replaces
literals with parameters after raw parsing. Certainly it is slower
than the first approach, but it still provides a significant performance
advantage: more than two times on pgbench. Then I ran the
regression tests and found several situations where type analysis is
not performed correctly when literals are replaced with parameters.

So the issue is that per-command output from the parser, SelectStmt,
only has strings for identifiers, e.g. table and column names, so you
can't be sure it is the same as the cached entry you matched. I suppose
if you cleared the cache every time someone created an object or changed
search_path, it might work.

Definitely, changing the session context (search_path, date/time format, ...)
may cause incorrect behavior of cached statements.
Actually, you may get the same problem with explicitly prepared
statements (though in that case you at least understand what is
going on, and it is your choice whether or not to use a prepared
statement).

The fact that 7 regression tests fail means that autoprepare can
really change the behavior of existing programs. This is why my
suggestion is to switch this feature off by default.
But in 99.9% of real cases (my estimate plucked out of thin air:) there
will be no such problems with autoprepare. And it can significantly
improve the performance of OLTP applications
which are not able to use prepared statements (because they work
through pgbouncer, or for other reasons).

Can autoprepare slow down the system?
Yes, it can. That can happen if an application performs a large number
of unique queries and the autoprepare cache size is not limited.
In this case a large (and ever-growing) number of stored plans can
consume a lot of memory and, even worse, slow down cache lookup.
This is why I limit the number of cached statements
(the autoprepare_limit parameter) to 100 by default.

I am almost sure that there will be some other issues with autoprepare
which I have not encountered yet (because I have mostly tested it on
pgbench and the Postgres regression tests).
But I am also sure that the benefit of doubling system performance is
good motivation to continue work in this direction.

My main concern is whether to continue improving the current approach
with a local (per-backend) cache of prepared statements,
or to create a shared cache (as in Oracle). A shared cache is much
harder to implement (the same problem with session context, plus
different catalog snapshots, cache invalidation, ...),
but it also provides more opportunities for query optimization and
tuning.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#47Bruce Momjian
bruce@momjian.us
In reply to: Konstantin Knizhnik (#46)
Re: Cached plans and statement generalization

On Fri, May 12, 2017 at 10:50:41AM +0300, Konstantin Knizhnik wrote:

Definitely, changing the session context (search_path, date/time format, ...)
may cause incorrect behavior of cached statements.

I wonder if we should clear the cache whenever any SET command is
issued.

Actually, you may get the same problem with explicitly prepared statements
(though in that case you at least understand what is going on, and it is
your choice whether or not to use a prepared statement).

The fact that 7 regression tests fail means that autoprepare can really
change the behavior of existing programs. This is why my suggestion is to
switch this feature off by default.

I would like to see us target something that can be enabled by default.
Even if it only improves performance by 5%, it would be better overall
than a feature that improves performance by 90% but is only used by 1%
of our users.

But in 99.9% of real cases (my estimate plucked out of thin air:) there will
be no such problems with autoprepare. And it can significantly improve
the performance of OLTP applications
which are not able to use prepared statements (because they work through
pgbouncer, or for other reasons).

Right, but we can't ship something that is 99.9% accurate when the
inaccuracy is indeterminate. The bottom line is that Postgres has a
very high bar, and I realize you are just prototyping at this point, but
the final product is going to have to address all the intricacies of the
issue for it to be added.

Can autoprepare slow down the system?
Yes, it can. That can happen if an application performs a large number of
unique queries and the autoprepare cache size is not limited.
In this case a large (and ever-growing) number of stored plans can
consume a lot of memory and, even worse, slow down cache lookup.
This is why I limit the number of cached statements
(the autoprepare_limit parameter) to 100 by default.

Yes, good idea.

I am almost sure that there will be some other issues with autoprepare which
I have not encountered yet (because I have mostly tested it on pgbench and
the Postgres regression tests).
But I am also sure that the benefit of doubling system performance is good
motivation to continue work in this direction.

Right, you are still testing to see where the edges are.

My main concern is whether to continue improving the current approach with
a local (per-backend) cache of prepared statements,
or to create a shared cache (as in Oracle). A shared cache is much harder
to implement (the same problem with session context, plus different
catalog snapshots, cache invalidation, ...),
but it also provides more opportunities for query optimization and tuning.

I would continue in the per-backend cache direction at this point
because we don't even have that solved yet. The global cache is going
to be even more complex.

Ultimately I think we are going to want global and local caches because
the plans of the local cache are much more likely to be accurate.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#48Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Bruce Momjian (#47)
Re: Cached plans and statement generalization

On 12.05.2017 18:23, Bruce Momjian wrote:

On Fri, May 12, 2017 at 10:50:41AM +0300, Konstantin Knizhnik wrote:

Definitely, changing the session context (search_path, date/time format, ...)
may cause incorrect behavior of cached statements.

I wonder if we should clear the cache whenever any SET command is
issued.

Well, with the autoprepare cache reset on each SET variable, ALTER SYSTEM,
and any slow utility statement,
only one regression test fails, and only because of a different error
message context:

*** /home/knizhnik/postgresql.master/src/test/regress/expected/date.out	2017-04-11 18:07:56.497461208 +0300
--- /home/knizhnik/postgresql.master/src/test/regress/results/date.out	2017-05-12 20:21:19.767566302 +0300
***************
*** 1443,1452 ****
  --
  SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
  ERROR:  timestamp units "microsec" not recognized
- CONTEXT:  SQL function "date_part" statement 1
  SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
  ERROR:  timestamp units "undefined" not supported
- CONTEXT:  SQL function "date_part" statement 1
  -- test constructors
  select make_date(2013, 7, 15);
   make_date
--- 1443,1450 ----

======================================================================

Actually you may get the same problem with explicitly prepared statements
(certainly, in the last case, you better understand what going on and it is
your choice whether to use or not to use prepared statement).

The fact of failure of 7 regression tests means that autoprepare can really
change behavior of existed program. This is why my suggestion is to switch
off this feature by default.

I would like to see us target something that can be enabled by default.
Even if it only improves performance by 5%, it would be better overall
than a feature that improves performance by 90% but is only used by 1%
of our users.

I have to agree with you here.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#49Andres Freund
andres@anarazel.de
In reply to: Konstantin Knizhnik (#46)
Re: Cached plans and statement generalization

On May 12, 2017 12:50:41 AM PDT, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

Definitely, changing the session context (search_path, date/time format, ...)
may cause incorrect behavior of cached statements.
Actually, you may get the same problem with explicitly prepared
statements (though in that case you at least understand what is
going on, and it is your choice whether or not to use a prepared
statement).

The fact that 7 regression tests fail means that autoprepare can
really change the behavior of existing programs. This is why my
suggestion is to switch this feature off by default.

I can't see us agreeing with this feature unless it actually reliably works, even if disabled by default.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


#50Bruce Momjian
bruce@momjian.us
In reply to: Konstantin Knizhnik (#48)
Re: Cached plans and statement generalization

On Fri, May 12, 2017 at 08:35:26PM +0300, Konstantin Knizhnik wrote:

On 12.05.2017 18:23, Bruce Momjian wrote:

On Fri, May 12, 2017 at 10:50:41AM +0300, Konstantin Knizhnik wrote:

Definitely, changing the session context (search_path, date/time format, ...)
may cause incorrect behavior of cached statements.

I wonder if we should clear the cache whenever any SET command is
issued.

Well, with the autoprepare cache reset on each SET variable, ALTER SYSTEM,
and any slow utility statement,
only one regression test fails, and only because of a different error
message context:

Great. We have to think of how we would keep this reliable when future
changes happen. Simple is often better because it limits the odds of
breakage.

Actually, you may get the same problem with explicitly prepared statements
(though in that case you at least understand what is going on, and it is
your choice whether or not to use a prepared statement).

The fact that 7 regression tests fail means that autoprepare can really
change the behavior of existing programs. This is why my suggestion is to
switch this feature off by default.

I would like to see us target something that can be enabled by default.
Even if it only improves performance by 5%, it would be better overall
than a feature that improves performance by 90% but is only used by 1%
of our users.

I have to agree with you here.

Good. I know there are database systems that will try to get the most
performance possible no matter how complex it is for users to configure
or to maintain, but that isn't us.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#51Robert Haas
robertmhaas@gmail.com
In reply to: Konstantin Knizhnik (#35)
Re: Cached plans and statement generalization

On Wed, May 10, 2017 at 12:11 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Robert, can you please explain why using TRY/CATCH is not safe here:

This is definitely not a safe way of using TRY/CATCH.

This has been discussed many, many times on this mailing list before,
and I don't really want to go over it again here. We really need a
README or some documentation about this so that we don't have to keep
answering this same question again and again.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#52Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Robert Haas (#51)
Re: Cached plans and statement generalization

On 15.05.2017 18:31, Robert Haas wrote:

On Wed, May 10, 2017 at 12:11 PM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Robert, can you please explain why using TRY/CATCH is not safe here:

This is definitely not a safe way of using TRY/CATCH.

This has been discussed many, many times on this mailing list before,
and I don't really want to go over it again here. We really need a
README or some documentation about this so that we don't have to keep
answering this same question again and again.

First of all, I want to note that the new version of my patch does not use
PG_TRY/PG_CATCH.
But I still want to understand what is wrong with these constructs.
I searched both the hackers mailing list archive and the wider web using
Google, but failed to find any references except about
sharing non-volatile variables between try and catch blocks.
Can you please point me at the thread where this problem was discussed,
or just explain in a few words the source of the problem?

From my own experience I found out that the PG_TRY/PG_CATCH mechanism does
not provide proper cleanup (unlike C++ exceptions).
If there are open relations, catalog cache entries, etc., then throwing an
error will not release them.
This causes no problems if the error is handled in PostgresMain, which
aborts the current transaction and releases all resources in any case.
But if I want to ignore this error and continue query execution, then
warnings about resource leaks can be reported.
Is that what you mean by the unsafety of PG_TRY/PG_CATCH constructs?

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#53Andres Freund
andres@anarazel.de
In reply to: Konstantin Knizhnik (#52)
Re: Cached plans and statement generalization

On 2017-05-18 11:57:57 +0300, Konstantin Knizhnik wrote:

From my own experience I found out that the PG_TRY/PG_CATCH mechanism does
not provide proper cleanup (unlike C++ exceptions).

Right, simply because there's no portable way to transparently do so.
Would be possible on elf glibc platforms, but ...

If there are open relations, catalog cache entries, etc., then throwing an
error will not release them.
This causes no problems if the error is handled in PostgresMain, which
aborts the current transaction and releases all resources in any case.
But if I want to ignore this error and continue query execution, then
warnings about resource leaks can be reported.
Is that what you mean by the unsafety of PG_TRY/PG_CATCH constructs?

There's worse than just leaking resources. Everything touching the
database might cause persistent corruption if you don't roll back.
Consider an insert that failed with a foreign key exception, done from
some function. If you ignore that error, the row will still be visible,
but the foreign key will be violated. If you want to continue after a
PG_CATCH you have to use a subtransaction/savepoint for the PG_TRY
contents, like several PLs do.
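
For the archives, the pattern the PLs use looks roughly like this (a sketch
modeled on plpgsql's exception-block handling; what to do with the caught
ErrorData is up to the caller):

	MemoryContext oldcontext = CurrentMemoryContext;
	ResourceOwner oldowner = CurrentResourceOwner;

	BeginInternalSubTransaction(NULL);
	/* run the protected code in the caller's memory context */
	MemoryContextSwitchTo(oldcontext);

	PG_TRY();
	{
		/* ... the code that may throw goes here ... */

		ReleaseCurrentSubTransaction();
		MemoryContextSwitchTo(oldcontext);
		CurrentResourceOwner = oldowner;
	}
	PG_CATCH();
	{
		ErrorData  *edata;

		/* save the error data and clean up the error state */
		MemoryContextSwitchTo(oldcontext);
		edata = CopyErrorData();
		FlushErrorState();

		/* roll back the subtransaction, undoing any half-done changes */
		RollbackAndReleaseCurrentSubTransaction();
		MemoryContextSwitchTo(oldcontext);
		CurrentResourceOwner = oldowner;

		/* ... continue, or ReThrowError(edata) ... */
	}
	PG_END_TRY();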

- Andres


#54Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#35)
1 attachment(s)
Re: Cached plans and statement generalization

On 10.05.2017 19:11, Konstantin Knizhnik wrote:

Based on Robert's feedback and Tom's proposal, I have implemented
two new versions of the autoprepare patch.

The first version is just a refactoring of my original implementation:
I extracted common code into the prepare_cached_plan and
exec_prepared_plan functions to avoid code duplication. I also rewrote
the assignment of values to parameters. Types of parameters are now
inferred from the types of literals, so there may be several prepared
plans which differ only in the types of their parameters. Due to the
problem with type coercion for parameters, I have to catch errors in
parse_analyze_varparams.

Attached please find a rebased version of the autoprepare patch based on
Tom's proposal (perform analysis on the tree with constant literals and
then replace them with parameters).
I have also submitted this patch to the Autumn commitfest.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare.patch (text/x-patch)
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 95c1d3e..0b0642b 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3710,6 +3710,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to the pointer to the current node and can update this field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes; all updates are done in place.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 75c2d9a..abab8c3 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -73,6 +75,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -188,7 +191,8 @@ static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void SigHupHandler(SIGNAL_ARGS);
 static void log_disconnections(int code, Datum arg);
-
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -962,6 +966,14 @@ exec_simple_query(const char *query_string)
 	isTopLevel = (list_length(parsetree_list) == 1);
 
 	/*
+	 * Try to find cached plan
+	 */
+	if (isTopLevel && autoprepare_threshold != 0 && exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1845,9 +1857,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1859,17 +1890,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1932,7 +1952,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4499,3 +4519,782 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 				  port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans and the exec_cached_query function,
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct a singly linked list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU doubly linked list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions).
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer the type of a literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. Only node tags are considered here; precise comparison of trees is done with equal().
+	 * The hash is calculated for the original (unpatched) tree, without ParamRef nodes.
+	 * That makes no difference, because the hash ignores the types and values of Const nodes, so the same generalized queries
+	 * still get the same hash value. There are about 1000 distinct node tags, which is why the hash is rotated by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (it is likely to be the same for all queries and folded by the optimizer)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and the WHERE, VALUES and WITH clauses,
+		   * skipping the FROM, GROUP BY, HAVING, window, ORDER BY, LIMIT and locking clauses, which are unlikely to contain parameterizable literals
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* a utility statement cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, transaction
+		 * control statements are currently not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that not preparing begin/commit statements actually has a positive effect.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* A parameter can be bound only once; mark this binding as unresolved */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Callback for raw_expression_tree_walker dropping parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of the converted literal in the query.
+ * Used for precise error reporting (error cursor position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+/*
+ * Try to generalize a query, find a cached plan for it, and execute it
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract array of parameters types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert the raw parse tree back to literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the precise location of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		plan_cache_hash = NULL;
+	}
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 1e941fb..8735650 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -684,10 +684,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventTransactionChain(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -960,6 +962,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 8191216..a3e6b9d 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92e1d63..ee673b4 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -472,6 +472,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1958,6 +1962,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Too many prepared queries can cause backend memory overflow and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index b6c9b48..1388822 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,8 +73,11 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 												   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+												   void *context);
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 											  void *context);
 
+
 #endif   /* NODEFUNCS_H */
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 12ff458..50dfad6 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif   /* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 87d0741..1215ac8 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -251,6 +251,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
#55Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Konstantin Knizhnik (#54)
Re: Cached plans and statement generalization

On Fri, May 26, 2017 at 3:54 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Attached please find rebased version of the autoprepare patch based on Tom's
proposal (perform analyze for tree with constant literals and then replace
them with parameters).
Also I submitted this patch for the Autumn commitfest.

The patch didn't survive the Summer bitrotfest. Could you please rebase it?

--
Thomas Munro
http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#56Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#55)
1 attachment(s)
Re: Cached plans and statement generalization

On 09.09.2017 06:35, Thomas Munro wrote:

On Fri, May 26, 2017 at 3:54 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Attached please find rebased version of the autoprepare patch based on Tom's
proposal (perform analyze for tree with constant literals and then replace
them with parameters).
Also I submitted this patch for the Autumn commitfest.

The patch didn't survive the Summer bitrotfest. Could you please rebase it?

Attached please find the rebased version of the patch.
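
To recap what the patch does to a query, the transformation is roughly the following (a sketch, with the table and literal chosen only for illustration):

    -- as sent by the client:
    SELECT abalance FROM pgbench_accounts WHERE aid = 42;
    -- generalized form that is analyzed, planned and cached,
    -- then executed with $1 = 42:
    SELECT abalance FROM pgbench_accounts WHERE aid = $1;

Null literals and literals inside constant expressions are intentionally
left alone, and a query is generalized only after it has been executed
more than autoprepare_threshold times.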
Here are the updated performance results (pgbench -s 100 -c 1):

protocol (-M)         read-write   read-only (-S)
------------------    ----------   --------------
simple                      3327            19325
extended                    2256            16908
prepared                    6145            39326
simple+autoprepare          4950            34841
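
For reference, these numbers can be reproduced with something like the
following (a sketch: the 60-second duration and the database name are my
choice, not taken from the runs above; autoprepare is enabled per session
through the autoprepare_threshold GUC added by the patch):

    pgbench -i -s 100 bench
    pgbench -c 1 -T 60 -M simple bench        # "simple" row
    pgbench -c 1 -T 60 -M extended bench      # "extended" row
    pgbench -c 1 -T 60 -M prepared bench      # "prepared" row
    PGOPTIONS="-c autoprepare_threshold=1" \
        pgbench -c 1 -T 60 -M simple bench    # "simple+autoprepare" row

Append -S for the read-only (SELECT-only) column. With
autoprepare_threshold = 1 the first execution of each generalized query is
planned as usual and all subsequent executions go through the cached plan.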

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-3.patch (text/x-patch)
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index e3eb0c5..17f3dfd 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3688,6 +3688,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
 * This function implements a slightly different approach to tree updates than expression_tree_mutator():
 * the callback is given a pointer to the pointer to the current node and can update this field instead of returning a reference to a new node.
 * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
 *
 * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, updating them in place.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The walker has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index c10d891..d41c0e5 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -75,6 +77,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -182,7 +185,8 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
-
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -956,6 +960,14 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find cached plan
+	 */
+	if (autoprepare_threshold != 0 && exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1852,9 +1864,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1866,17 +1897,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1939,7 +1959,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4534,3 +4554,782 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 					port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans and the exec_cached_query function,
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* L1-list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
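+	/* dynahash match-function convention: return 0 when the keys match, nonzero otherwise */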
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct the L1 list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU L2-list for cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * Here we calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * This is not a problem, because the hash calculation doesn't take into account the types and values of Const nodes, so the same generalized queries
+	 * will have the same hash value. There are about 1000 different node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not perform substitution of literals in a constant expression (which is likely to be the same for all queries and folded at plan time)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list, WHERE, VALUES and WITH clauses,
+		   * skipping the FROM list and other clauses, which are unlikely to contain parameterized values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* Utility statement cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
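+		/* Find the binding whose replaced literal has the same parse location as this Const */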
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* Parameter can be used only once */
+				binding->type = UNKNOWNOID;
+				/* return (Node*)binding->param; */
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Callback for raw_expression_tree_walker dropping parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of converted literal in query.
+ * Used for precise error reporting (line number)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+/*
+ * Try to generalize query, find cached plan for it and execute
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to precisely report the error in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		plan_cache_hash = NULL;
+	}
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 775477c..94ee570 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -684,10 +684,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventTransactionChain(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -960,6 +962,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index d0e54b8..af641f0 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 246fea8..a991c6f 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -472,6 +472,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1958,6 +1962,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Too many prepared queries can cause backend memory overflow and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 3366983..263584d 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,6 +73,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 6abfe7b..d25e744 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index c1870d2..cd6a645 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -251,6 +251,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
#57Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#56)
1 attachment(s)
Re: Cached plans and statement generalization

On 11.09.2017 12:24, Konstantin Knizhnik wrote:

Attached please find a rebased version of the patch.
Here are the updated performance results, in transactions per second (pgbench -s 100 -c 1):

protocol (-M)        read-write   read-only (-S)
simple                     3327            19325
extended                   2256            16908
prepared                   6145            39326
simple+autoprepare         4950            34841

One more patch, passing all regression tests with autoprepare_threshold=1.
I still do not think that it should be switched on by default...
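
For anyone who wants to try it, here is a minimal usage sketch. The two GUCs
are the ones added by this patch; the query against the standard pgbench
schema is purely illustrative:

    -- Generalize and cache a query once it has been executed
    -- autoprepare_threshold times; keep at most autoprepare_limit plans,
    -- evicting the least recently used entry.
    SET autoprepare_threshold = 2;
    SET autoprepare_limit = 113;

    -- All three statements share the generalized form
    --   SELECT abalance FROM pgbench_accounts WHERE aid = $1
    -- so the third execution should already be served by the cached plan.
    SELECT abalance FROM pgbench_accounts WHERE aid = 1;
    SELECT abalance FROM pgbench_accounts WHERE aid = 2;
    SELECT abalance FROM pgbench_accounts WHERE aid = 3;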

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-4.patch (text/x-patch)
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index e3eb0c5..17f3dfd 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3688,6 +3688,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to the pointer to the current node and can update this field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, doing in-place updates.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, which is
+ * all that query generalization needs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index c10d891..e0d62ef 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -75,6 +77,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -182,7 +185,8 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
-
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 
 /* ----------------------------------------------------------------
  *		routines to obtain user input
@@ -956,6 +960,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single-statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1852,9 +1866,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1866,17 +1899,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1939,7 +1961,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4534,3 +4556,782 @@ log_disconnections(int code, Datum arg)
 					port->user_name, port->database_name, port->remote_host,
 					port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
+
+/*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans, and the exec_cached_query function,
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* L1-list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
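+	/* dynahash match-function convention: return 0 when the keys match, nonzero otherwise */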
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct the L1 list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU L2-list for cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * Here we calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * This is not a problem, because the hash calculation doesn't take into account the types and values of Const nodes, so the same generalized queries
+	 * will have the same hash value. There are about 1000 different node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not perform substitution of literals in a constant expression (which is likely to be the same for all queries and folded at plan time)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list, WHERE, VALUES and WITH clauses,
+		   * skipping the FROM list and other clauses, which are unlikely to contain parameterized values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* Utility statement cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
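+		/* Find the binding whose replaced literal has the same parse location as this Const */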
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* Parameter can be used only once */
+				binding->type = UNKNOWNOID;
+				/* return (Node*)binding->param; */
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Callback for raw_expression_tree_walker dropping parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of converted literal in query.
+ * Used for precise error reporting (line number)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+/*
+ * Try to generalize query, find cached plan for it and execute
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+	/*
+	 * Prepare the query only when it has been executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the error position precisely
+		 * if a conversion error occurs while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally, execute the prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		plan_cache_hash = NULL;
+	}
+}
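
The generalize-and-undo bookkeeping above is the heart of the approach: parse_tree_generalizer records, for every substituted literal, the address of the tree slot it rewrote, and undo_query_plan_changes simply writes the saved Const nodes back through those addresses on a cache miss. Below is a minimal, self-contained sketch of that pattern; ToyNode, ToyBinding and generalize_slot are illustrative stand-ins, not patch code.

#include <stdio.h>
#include <stdlib.h>

typedef enum { TOY_CONST, TOY_PARAM } ToyTag;

typedef struct ToyNode
{
	ToyTag		tag;
	int			value;			/* literal value or parameter index */
} ToyNode;

/* One record per substituted literal: where it was and what it replaced */
typedef struct ToyBinding
{
	ToyNode	  **ref;			/* tree slot that was rewritten in place */
	ToyNode	   *literal;		/* original Const node, kept for undo */
	struct ToyBinding *next;
} ToyBinding;

/* Replace a Const slot with a Param, remembering how to undo it */
static ToyBinding *
generalize_slot(ToyNode **slot, int param_index, ToyBinding *head)
{
	ToyBinding *b = malloc(sizeof(ToyBinding));
	ToyNode	   *param = malloc(sizeof(ToyNode));

	param->tag = TOY_PARAM;
	param->value = param_index;
	b->ref = slot;
	b->literal = *slot;
	b->next = head;
	*slot = param;				/* in-place update, no tree copy */
	return b;
}

/* Write the saved Const nodes back, as undo_query_plan_changes() does */
static void
undo_changes(ToyBinding *b)
{
	while (b != NULL)
	{
		*b->ref = b->literal;
		b = b->next;
	}
}

int
main(void)
{
	ToyNode		lit = {TOY_CONST, 42};
	ToyNode	   *slot = &lit;	/* stands in for a slot inside a parse tree */
	ToyBinding *bindings = generalize_slot(&slot, 1, NULL);

	printf("generalized: tag=%d\n", slot->tag);		/* 1, i.e. TOY_PARAM */
	undo_changes(bindings);
	printf("restored: value=%d\n", slot->value);	/* 42 again */
	/* the Param node is leaked in this toy; in the patch a backend
	 * memory context handles cleanup */
	free(bindings);
	return 0;
}

The point of keeping the binding list is exactly what the comment above undo_query_plan_changes states: reverting a handful of slot writes is much cheaper than re-copying the whole raw parse tree on every cache miss.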
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 775477c..94ee570 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -684,10 +684,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventTransactionChain(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -960,6 +962,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index d0e54b8..af641f0 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 246fea8..a991c6f 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -472,6 +472,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1958,6 +1962,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing query."),
+		 gettext_noop("0 value disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal number of autoprepared queries."),
+		 gettext_noop("0 means unlimited number of autoprepared queries. Too large number of prepared queries can cause backend memory overflow and slowdown execution speed (because of increased lookup time)")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 3366983..263584d 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,6 +73,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 6abfe7b..d25e744 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index c1870d2..cd6a645 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -251,6 +251,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
#58Michael Paquier
michael.paquier@gmail.com
In reply to: Konstantin Knizhnik (#57)
Re: [HACKERS] Cached plans and statement generalization

On Wed, Sep 13, 2017 at 2:11 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

One more patch passing all regression tests with autoprepare_threshold=1.
I still do not think that it should be switched on by default...

This patch does not apply, and did not get any reviews, so I am moving
it to the next CF with "waiting on author" as its status. Please provide a
rebased version. Tsunakawa-san, you are listed as a reviewer of this
patch. If you are not planning to look at it anymore, you may want to
remove your name from the related CF entry
https://commitfest.postgresql.org/16/1150/.
--
Michael

#59Tsunakawa, Takayuki
tsunakawa.takay@jp.fujitsu.com
In reply to: Michael Paquier (#58)
RE: [HACKERS] Cached plans and statement generalization

From: Michael Paquier [mailto:michael.paquier@gmail.com]

This patch does not apply, and did not get any reviews, so I am moving it
to the next CF with "waiting on author" as its status. Please provide a rebased version.
Tsunakawa-san, you are listed as a reviewer of this patch. If you are not
planning to look at it anymore, you may want to remove your name from the
related CF entry https://commitfest.postgresql.org/16/1150/.

Sorry, but I'm planning to review this next month because this proposal could alleviate a problem we have faced.

Regards
Takayuki Tsunakawa

#60Michael Paquier
michael.paquier@gmail.com
In reply to: Tsunakawa, Takayuki (#59)
Re: [HACKERS] Cached plans and statement generalization

On Thu, Nov 30, 2017 at 11:07 AM, Tsunakawa, Takayuki
<tsunakawa.takay@jp.fujitsu.com> wrote:

From: Michael Paquier [mailto:michael.paquier@gmail.com]

This patch does not apply, and did not get any reviews, so I am moving it
to the next CF with "waiting on author" as its status. Please provide a rebased version.
Tsunakawa-san, you are listed as a reviewer of this patch. If you are not
planning to look at it anymore, you may want to remove your name from the
related CF entry https://commitfest.postgresql.org/16/1150/.

Sorry, but I'm planning to review this next month because this proposal could alleviate a problem we have faced.

No problem. Thanks for the update.
--
Michael

#61Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Michael Paquier (#58)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 30.11.2017 04:59, Michael Paquier wrote:

On Wed, Sep 13, 2017 at 2:11 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

One more patch passing all regression tests with autoprepare_threshold=1.
I still do not think that it should be switched on by default...

This patch does not apply, and did not get any reviews, so I am moving
it to the next CF with "waiting on author" as its status. Please provide a
rebased version. Tsunakawa-san, you are listed as a reviewer of this
patch. If you are not planning to look at it anymore, you may want to
remove your name from the related CF entry
https://commitfest.postgresql.org/16/1150/.

An updated version of the patch is attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-5.patch (text/x-patch)
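
The central new infrastructure in this version is raw_expression_tree_mutator (first hunk below). Unlike expression_tree_mutator, its callback receives the address of each child slot (Node **), so the callback can swap a node in place and the caller can later restore it. Here is a toy sketch of that calling convention; Expr, ExprMutator and count_literals are illustrative stand-ins, not the patch's Node types.

#include <stdbool.h>
#include <stdio.h>

typedef struct Expr
{
	int			value;			/* 0 marks a "literal" leaf in this toy */
	struct Expr *left;
	struct Expr *right;
} Expr;

/* Returning true aborts the walk, mirroring the patch's convention */
typedef bool (*ExprMutator) (Expr **slot, void *context);

static bool
expr_tree_mutator(Expr *node, ExprMutator mutator, void *context)
{
	if (node == NULL)
		return false;
	/* hand the callback the *address* of each child, so it may
	 * replace the child via *slot = new_node and remember the slot */
	if (mutator(&node->left, context))
		return true;
	if (mutator(&node->right, context))
		return true;
	return false;
}

/* Example callback: count literal leaves, then recurse */
static bool
count_literals(Expr **slot, void *context)
{
	if (*slot == NULL)
		return false;
	if ((*slot)->value == 0)
		(*(int *) context)++;
	return expr_tree_mutator(*slot, count_literals, context);
}

int
main(void)
{
	Expr		a = {0, NULL, NULL};	/* literal leaf */
	Expr		b = {7, NULL, NULL};
	Expr		root = {1, &a, &b};
	int			n = 0;

	expr_tree_mutator(&root, count_literals, &n);
	printf("literal leaves: %d\n", n);	/* prints 1 */
	return 0;
}

This is also why the patch's mutator returns bool rather than Node *: the callback performs any replacement itself through the slot pointer, and a true return value simply means "stop, this statement cannot be generalized".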
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index c2a93b2..0e6cc89 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3688,6 +3688,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates
+ * than expression_tree_mutator(): the callback is given a pointer to the
+ * pointer to the current node and can update that field in place instead
+ * of returning a reference to a new node.  This makes it possible to
+ * remember changes and easily revert them without an extra traversal of
+ * the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never
+ * implicitly copies tree nodes, updating them in place.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs.  If any other node type is visited, iteration stops
+ * immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
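+			/* unhandled node type: report it, so the caller can give up on mutation */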
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 1ae9ac2..7fa8310 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -75,6 +77,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -191,10 +194,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -967,6 +971,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find and execute a cached generalized plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can autoprepare only single-statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1865,9 +1879,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1879,17 +1912,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1952,7 +1974,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4572,6 +4594,786 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans, and the exec_cached_query
+ * function, which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
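+
+/*
+ * Usage sketch (hypothetical session; the table name and values are illustrative):
+ *
+ *     SET autoprepare_threshold = 5;
+ *     SELECT * FROM t WHERE k = 1;   -- planned from scratch (first 5 executions)
+ *     SELECT * FROM t WHERE k = 2;   -- same generalized form: "k = $1"
+ *     SELECT * FROM t WHERE k = 6;   -- 6th execution runs the cached plan
+ *
+ * Queries that differ only in literal values map to the same cache entry.
+ */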
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
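+/*
+ * Hash table support functions for the plan cache. Entries are looked up by
+ * the hash precomputed while generalizing the parse tree and compared with
+ * equal() on the generalized tree plus the array of parameter types.
+ */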
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
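+/* Default initial size of the plan cache hash table (used when autoprepare_limit is 0) */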
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to build a singly linked list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* doubly linked LRU list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if expression is constant (used to avoid substituting literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
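+/*
+ * Convert the literal value saved during generalization to a Datum of the
+ * resolved parameter type: directly for int4 integer literals, through the
+ * type's input function otherwise.
+ */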
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate hash for the parse tree. We consider only node tags here; precise comparison of trees is done with the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without ParamRef nodes.
+	 * That is harmless, because the hash doesn't take into account the types and values of Const nodes, so the same generalized queries
+	 * will have the same hash value. There are about 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is likely to be the same for all queries and folded by the optimizer)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and the WHERE, VALUES and WITH clauses,
+		   * skipping the FROM list and other clauses, which are unlikely to contain parameterizable values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* utility statements cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
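+/*
+ * Callback for query_tree_mutator: after parse analysis, replace Const nodes
+ * originating from the substituted literals with Param nodes, matching them
+ * to parameter bindings by source location.
+ */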
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* A literal may be bound to a parameter only once; mark its type as unresolved */
+				binding->type = UNKNOWNOID;
+				/* return (Node*)binding->param; */
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Callback for raw_expression_tree_walker dropping parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of the converted literal in the query.
+ * Used for precise error reporting (error cursor position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+/*
+ * Try to generalize the query, find a cached plan for it and execute it.
+ * Returns true if the query was executed through the plan cache.
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract array of parameters types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the error position precisely in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
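+/*
+ * Reset the autoprepare cache, dropping all cached plans and the generalized
+ * parse trees used as hash keys. Called on system cache invalidation, utility
+ * statements and GUC changes, since any of them may invalidate cached plans.
+ */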
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		plan_cache_hash = NULL;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 4da1f8f..a64d2b1 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -688,10 +688,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventTransactionChain(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -964,6 +966,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 0e61b4b..4980da7 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 6dcd738..0481c24 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -473,6 +473,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1968,6 +1972,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicitly preparing frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Sets the number of executions after which a query is autoprepared."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Too many prepared queries can exhaust backend memory and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
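+
+	/*
+	 * Illustrative tuning note (hypothetical values): with autoprepare_threshold = 5
+	 * a query is autoprepared on its sixth execution, and with autoprepare_limit = 113
+	 * at most 113 generalized plans are kept per backend, evicted in LRU order.
+	 */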
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 3366983..263584d 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,6 +73,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 6abfe7b..d25e744 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 41335b8..85964e0 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,9 @@ extern int	client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
#62Stephen Frost
sfrost@snowman.net
In reply to: Konstantin Knizhnik (#61)
Re: [HACKERS] Cached plans and statement generalization

Greetings,

* Konstantin Knizhnik (k.knizhnik@postgrespro.ru) wrote:

On 30.11.2017 04:59, Michael Paquier wrote:

On Wed, Sep 13, 2017 at 2:11 AM, Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

One more patch passing all regression tests with autoprepare_threshold=1.
I still do not think that it should be switch on by default...

This patch does not apply, and did not get any reviews. So I am moving
it to next CF with waiting on author as status. Please provide a
rebased version. Tsunakawa-san, you are listed as a reviewer of this
patch. If you are not planning to look at it anymore, you may want to
remove your name from the related CF entry
https://commitfest.postgresql.org/16/1150/.

Updated version of the patch is attached.

This patch appears to apply with just a bit of fuzz and make check
passes, so I'm not sure why this is currently marked as 'Waiting for
author'.

I've updated it to be 'Needs review'. If that's incorrect, feel free to
change it back with an explanation.

Thanks!

Stephen

#63Thomas Munro
thomas.munro@enterprisedb.com
In reply to: Stephen Frost (#62)
Re: [HACKERS] Cached plans and statement generalization

On Sun, Jan 7, 2018 at 11:51 AM, Stephen Frost <sfrost@snowman.net> wrote:

* Konstantin Knizhnik (k.knizhnik@postgrespro.ru) wrote:

Updated version of the patch is attached.

This patch appears to apply with just a bit of fuzz and make check
passes, so I'm not sure why this is currently marked as 'Waiting for
author'.

I've updated it to be 'Needs review'. If that's incorrect, feel free to
change it back with an explanation.

Hi Konstantin,

/home/travis/build/postgresql-cfbot/postgresql/src/backend/tcop/postgres.c:5249:
undefined reference to `PortalGetHeapMemory'

That's because commit 0f7c49e85518dd846ccd0a044d49a922b9132983 killed
PortalGetHeapMemory. Looks like it needs to be replaced with
portal->portalContext.
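
For example, the failing call site presumably just needs the mechanical
substitution (sketch, untested):

-	old_context = MemoryContextSwitchTo(PortalGetHeapMemory(portal));
+	old_context = MemoryContextSwitchTo(portal->portalContext);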

--
Thomas Munro
http://www.enterprisedb.com

#64Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#63)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 12.01.2018 03:40, Thomas Munro wrote:

On Sun, Jan 7, 2018 at 11:51 AM, Stephen Frost <sfrost@snowman.net> wrote:

* Konstantin Knizhnik (k.knizhnik@postgrespro.ru) wrote:

Updated version of the patch is attached.

This patch appears to apply with just a bit of fuzz and make check
passes, so I'm not sure why this is currently marked as 'Waiting for
author'.

I've updated it to be 'Needs review'. If that's incorrect, feel free to
change it back with an explanation.

Hi Konstantin,

/home/travis/build/postgresql-cfbot/postgresql/src/backend/tcop/postgres.c:5249:
undefined reference to `PortalGetHeapMemory'

That's because commit 0f7c49e85518dd846ccd0a044d49a922b9132983 killed
PortalGetHeapMemory. Looks like it needs to be replaced with
portal->portalContext.

Hi Thomas,

Thank you very much for reporting the problem.
Rebased version of the patch is attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-6.patch (text/x-patch)
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 6c76c41..8922950 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3688,6 +3688,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than
+ * expression_tree_mutator(): the callback is given a pointer to the pointer to
+ * the current node and can update that field in place instead of returning a
+ * reference to a new node. This makes it possible to remember changes and
+ * easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never
+ * implicitly copies tree nodes, performing in-place updates instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
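+/*
+ * Usage sketch (illustrative, assuming the context points at an int
+ * parameter counter): a minimal mutator callback that replaces every
+ * non-null literal with a ParamRef through the Node** reference could be
+ *
+ *	static bool
+ *	replace_literals(Node **ref, void *context)
+ *	{
+ *		Node	   *node = *ref;
+ *
+ *		if (node == NULL)
+ *			return false;
+ *		if (IsA(node, A_Const) && ((A_Const *) node)->val.type != T_Null)
+ *		{
+ *			ParamRef   *param = makeNode(ParamRef);
+ *
+ *			param->number = ++(*(int *) context);
+ *			param->location = ((A_Const *) node)->location;
+ *			*ref = (Node *) param;
+ *			return false;
+ *		}
+ *		return raw_expression_tree_mutator(node, replace_literals, context);
+ *	}
+ *
+ * Remembering the old *ref values allows the substitutions to be undone
+ * later without another traversal; raw_parse_tree_generalizer() in
+ * postgres.c uses exactly this technique.
+ */
+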
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index ddc3ec8..cb1f20e 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -42,11 +42,13 @@
 #include "catalog/pg_type.h"
 #include "commands/async.h"
 #include "commands/prepare.h"
+#include "commands/defrem.h"
 #include "libpq/libpq.h"
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
 #include "nodes/print.h"
+#include "nodes/nodeFuncs.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
 #include "pg_trace.h"
@@ -75,6 +77,7 @@
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -191,10 +194,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -967,6 +971,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find a cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single-statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1866,9 +1880,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1880,17 +1913,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1953,7 +1975,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4576,6 +4598,786 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached
+ * plans, and the exec_cached_query function, which combines
+ * exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct a singly linked list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* doubly linked LRU list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer the type of a literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here; precise comparison of trees is done with the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without ParamRef nodes.
+	 * That is acceptable because the hash calculation doesn't take into account the types and values of Const nodes, so the same
+	 * generalized queries still get the same hash value. There are about 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is likely to be the same for all queries and folded by the optimizer)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and the WHERE, VALUES and WITH clauses,
+		   * skipping the FROM list, which is unlikely to contain parameterized values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* Utility statements cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
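+/*
+ * For illustration: after this pass, a query such as
+ *		SELECT * FROM t WHERE pk = 1001 AND tag = 'car'
+ * has been generalized (in the raw parse tree) to
+ *		SELECT * FROM t WHERE pk = $1 AND tag = $2
+ * while binding_list remembers the literals 1001 and 'car', so that they
+ * can either be bound as parameter values or put back by
+ * undo_query_plan_changes() when the query turns out not to be cacheable.
+ */
+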
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* Parameter can be used only once */
+				binding->type = UNKNOWNOID;
+				//return (Node*)binding->param;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems more efficient than copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Callback for raw_expression_tree_walker that frees parse tree nodes
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of the converted literal in the query.
+ * Used for precise error reporting (error cursor position).
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+/*
+ * Try to generalize the query, find a cached plan for it, and execute it
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare query only when it is executed more than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || entry->exec_count++ < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert the raw parse tree to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to precisely report the error position in case of a conversion failure while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally, execute the prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
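+/*
+ * Reset the autoprepare cache. This is invoked (see the utility.c and
+ * inval.c hunks below) after utility statements, SET commands, and system
+ * cache invalidation, since any of these may make the cached generalized
+ * plans stale.
+ */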
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		plan_cache_hash = 0;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index ec98a61..270d0dc 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -688,10 +688,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventTransactionChain(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -964,6 +966,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 955f5c7..e692283 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 72f6be3..3f26542 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -473,6 +473,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -1985,6 +1989,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Too many prepared queries can cause backend memory overflow and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
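+	/*
+	 * Usage sketch (illustrative): with SET autoprepare_threshold = 5, a
+	 * repeated query with literals is planned normally for its first five
+	 * executions and is served from an implicitly prepared generalized plan
+	 * from the sixth execution on, subject to the autoprepare_limit cap on
+	 * the number of cached plans.
+	 */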
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 849f34d..ba97744 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,6 +73,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 507d89c..5770c42 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 77daa5a..def6c27 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,9 @@ extern PGDLLIMPORT int client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
#65David Steele
david@pgmasters.net
In reply to: Konstantin Knizhnik (#64)
Re: Re: [HACKERS] Cached plans and statement generalization

Hi Konstantin,

On 1/12/18 7:53 AM, Konstantin Knizhnik wrote:

On 12.01.2018 03:40, Thomas Munro wrote:

On Sun, Jan 7, 2018 at 11:51 AM, Stephen Frost <sfrost@snowman.net>
wrote:

* Konstantin Knizhnik (k.knizhnik@postgrespro.ru) wrote:

Updated version of the patch is attached.

This patch appears to apply with just a bit of fuzz and make check
passes, so I'm not sure why this is currently marked as 'Waiting for
author'.

I've updated it to be 'Needs review'.  If that's incorrect, feel free to
change it back with an explanation.

Hi Konstantin,

/home/travis/build/postgresql-cfbot/postgresql/src/backend/tcop/postgres.c:5249:

undefined reference to `PortalGetHeapMemory'

That's because commit 0f7c49e85518dd846ccd0a044d49a922b9132983 killed
PortalGetHeapMemory.  Looks like it needs to be replaced with
portal->portalContext.

Hi  Thomas,

Thank you very much for reporting the problem.
Rebased version of the patch is attached.

This patch has received no review or comments since last May and appears
too complex and invasive for the final CF of PG11.

I don't think it makes sense to keep pushing a patch through CFs when it
is not getting reviewed. I'm planning to mark this as Returned with
Feedback unless there are solid arguments to the contrary.

Thanks,
--
-David
david@pgmasters.net

#66David Steele
david@pgmasters.net
In reply to: David Steele (#65)
Re: Re: Re: [HACKERS] Cached plans and statement generalization

On 3/2/18 9:26 AM, David Steele wrote:

On 1/12/18 7:53 AM, Konstantin Knizhnik wrote:

On 12.01.2018 03:40, Thomas Munro wrote:

On Sun, Jan 7, 2018 at 11:51 AM, Stephen Frost <sfrost@snowman.net>
wrote:

* Konstantin Knizhnik (k.knizhnik@postgrespro.ru) wrote:

Updated version of the patch is attached.

This patch appears to apply with just a bit of fuzz and make check
passes, so I'm not sure why this is currently marked as 'Waiting for
author'.

I've updated it to be 'Needs review'.  If that's incorrect, feel free to
change it back with an explanation.

Hi Konstantin,

/home/travis/build/postgresql-cfbot/postgresql/src/backend/tcop/postgres.c:5249:

undefined reference to `PortalGetHeapMemory'

That's because commit 0f7c49e85518dd846ccd0a044d49a922b9132983 killed
PortalGetHeapMemory.  Looks like it needs to be replaced with
portal->portalContext.

Hi  Thomas,

Thank you very much for reporting the problem.
Rebased version of the patch is attached.

This patch has received no review or comments since last May and appears
too complex and invasive for the final CF of PG11.

I don't think it makes sense to keep pushing a patch through CFs when it
is not getting reviewed. I'm planning to mark this as Returned with
Feedback unless there are solid arguments to the contrary.

Marked as Returned with Feedback.

Regards,
--
-David
david@pgmasters.net

#67Yamaji, Ryo
yamaji.ryo@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#64)
RE: [HACKERS] Cached plans and statement generalization

-----Original Message-----
From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
Sent: Friday, January 12, 2018 9:53 PM
To: Thomas Munro <thomas.munro@enterprisedb.com>; Stephen Frost
<sfrost@snowman.net>
Cc: Michael Paquier <michael.paquier@gmail.com>; PostgreSQL mailing
lists <pgsql-hackers@postgresql.org>; Tsunakawa, Takayuki/綱川 貴之
<tsunakawa.takay@jp.fujitsu.com>
Subject: Re: [HACKERS] Cached plans and statement generalization

Thank you very much for reporting the problem.
Rebased version of the patch is attached.

Hi Konstantin.

I think this patch is excellent. Our company's customers want to migrate
from other databases to PostgreSQL, but then they face the problem of
having to modify their application programs to use prepared statements.
This patch makes it possible to solve that problem. I want to help get
this patch committed. Will you submit it
to the following CF?

To review this patch, I tested it. The test environment was
PostgreSQL 11beta2. Because of recent PostgreSQL changes, "executor/spi.h" and "jit/jit.h"
now have to be added to postgres.c of the patch. Please rebase.

1. I measured the performance impact of applying this patch.
The results show the same tendency that Konstantin reported.
pgbench, -s 100, -c 8, -t 10000, read-only:
simple: 20251 TPS
prepare: 29502 TPS
simple(autoprepare): 28001 TPS

2. I measured how the length of the autoprepared query affects memory utilization.
Short queries have 1 constant; long queries have 100 constants.
The results show that preparing long queries uses considerably more memory.
before prepare:plan cache context: 1032 used
prepare 10 short query statement:plan cache context: 15664 used
prepare 10 long query statement:plan cache context: 558032 used

In this patch, autoprepare_limit sets the maximum number of queries that can be autoprepared.
However, is that the right knob? I can imagine the following scenarios:
- Application-side users: to get the best performance, they want to specify the number of
prepared queries.
- Operations-side users: to keep memory from overflowing, they want to set a cap on
memory utilization.
Therefore, I propose adding a parameter that specifies the maximum memory utilization.

3. I tracked how the amount of memory changes when trying to prepare more
distinct queries than the value specified for autoprepare_limit.
[autoprepare_limit=1 and execute 10 different queries]
plan cache context: 1032 used
plan cache context: 39832 used
plan cache context: 78552 used
plan cache context: 117272 used
plan cache context: 155952 used
plan cache context: 194632 used
plan cache context: 233312 used
plan cache context: 272032 used
plan cache context: 310712 used
plan cache context: 349392 used
plan cache context: 388072 used

I am puzzled that memory utilization keeps increasing when I execute many
distinct queries even though only one query is cached (autoprepare_limit=1).
Is this behavior correct?
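
For reference, here is a minimal sketch of the kind of session used for the
checks above (assuming the patch is applied; the table and the literals are
just placeholders):

SET autoprepare_threshold = 1;  -- a non-zero value enables autoprepare
SET autoprepare_limit = 1;      -- cache at most one generalized statement
SELECT abalance FROM pgbench_accounts WHERE aid = 1;
SELECT abalance FROM pgbench_accounts WHERE aid = 2;  -- same shape, new literal
SELECT statement, parameter_types, exec_count
  FROM pg_autoprepared_statements;  -- inspect the per-backend cache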

Best regards, Yamaji

#68Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Yamaji, Ryo (#67)
Re: [HACKERS] Cached plans and statement generalization

Hi Yamaji,

On 31.07.2018 12:12, Yamaji, Ryo wrote:

-----Original Message-----
From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
Sent: Friday, January 12, 2018 9:53 PM
To: Thomas Munro <thomas.munro@enterprisedb.com>; Stephen Frost
<sfrost@snowman.net>
Cc: Michael Paquier <michael.paquier@gmail.com>; PostgreSQL mailing
lists <pgsql-hackers@postgresql.org>; Tsunakawa, Takayuki/綱川 貴之
<tsunakawa.takay@jp.fujitsu.com>
Subject: Re: [HACKERS] Cached plans and statement generalization

Thank you very much for reporting the problem.
Rebased version of the patch is attached.

Hi Konstantin.

I think this patch is excellent. Our company's customers want to migrate
from other databases to PostgreSQL, but then they face the problem of
having to modify their application programs to use prepared statements.
This patch makes it possible to solve that problem. I want to help get
this patch committed. Will you submit it
to the following CF?

This patch will be included in the next release of PgPro EE.
Concerning the next commitfest, I am not sure.
At the previous commitfest it was returned with the feedback that it "has
received no review or comments since last May".
Maybe your review will help to change this situation.

To review this patch, I tested it. The test environment was
PostgreSQL 11beta2. Because of recent PostgreSQL changes, "executor/spi.h" and "jit/jit.h"
now have to be added to postgres.c of the patch. Please rebase.

1. I measured the performance impact of applying this patch.
The results show the same tendency that Konstantin reported.
pgbench, -s 100, -c 8, -t 10000, read-only:
simple: 20251 TPS
prepare: 29502 TPS
simple(autoprepare): 28001 TPS

2. I measured how the length of the autoprepared query affects memory utilization.
Short queries have 1 constant; long queries have 100 constants.
The results show that preparing long queries uses considerably more memory.
before prepare:plan cache context: 1032 used
prepare 10 short query statement:plan cache context: 15664 used
prepare 10 long query statement:plan cache context: 558032 used

In this patch, autoprepare_limit sets the maximum number of queries that can be autoprepared.
However, is that the right knob? I can imagine the following scenarios:
- Application-side users: to get the best performance, they want to specify the number of
prepared queries.
- Operations-side users: to keep memory from overflowing, they want to set a cap on
memory utilization.
Therefore, I propose adding a parameter that specifies the maximum memory utilization.

I agree that it may be more useful to limit the amount of memory used by
prepared queries rather than the number of prepared statements.
But such a limit is more difficult to calculate and maintain (I am not sure
that just looking at CacheMemoryContext is enough for it).
Also, if the working set of queries (frequently repeated queries) doesn't
fit in memory, then autoprepare will be almost useless, because with
high probability
a prepared query will be thrown out of the cache before it can be
reused. So limiting the number of queries from the "application side" seems
more practical.
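
To illustrate how the two GUCs are intended to interact, here is a short
sketch (assuming this patch is applied; the names are the ones declared in
guc.h and the pg_autoprepared_statements view):

SET autoprepare_threshold = 1;  -- a non-zero value enables autoprepare
SET autoprepare_limit = 100;    -- keep at most 100 autoprepared statements (LRU)
-- after running the workload, check which generalized statements survived:
SELECT statement, exec_count
  FROM pg_autoprepared_statements
 ORDER BY exec_count DESC;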

3. I tracked how the amount of memory changes when trying to prepare more
distinct queries than the value specified for autoprepare_limit.
[autoprepare_limit=1 and execute 10 different queries]
plan cache context: 1032 used
plan cache context: 39832 used
plan cache context: 78552 used
plan cache context: 117272 used
plan cache context: 155952 used
plan cache context: 194632 used
plan cache context: 233312 used
plan cache context: 272032 used
plan cache context: 310712 used
plan cache context: 349392 used
plan cache context: 388072 used

I am puzzled that memory utilization keeps increasing when I execute many
distinct queries even though only one query is cached (autoprepare_limit=1).
Is this behavior correct?

I will check it.

#69Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#68)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 01.08.2018 00:30, Konstantin Knizhnik wrote:

Hi Yamaji,

On 31.07.2018 12:12, Yamaji, Ryo wrote:

-----Original Message-----
From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
Sent: Friday, January 12, 2018 9:53 PM
To: Thomas Munro <thomas.munro@enterprisedb.com>; Stephen Frost
<sfrost@snowman.net>
Cc: Michael Paquier <michael.paquier@gmail.com>; PostgreSQL mailing
lists <pgsql-hackers@postgresql.org>; Tsunakawa, Takayuki/綱川 貴之
<tsunakawa.takay@jp.fujitsu.com>
Subject: Re: [HACKERS] Cached plans and statement generalization

Thank you very much for reporting the problem.
Rebased version of the patch is attached.

Hi Konstantin.

I think this patch is excellent. Our company's customers want to migrate
from other databases to PostgreSQL, but then they face the problem of
having to modify their application programs to use prepared statements.
This patch makes it possible to solve that problem. I want to help get
this patch committed. Will you
submit it
to the following CF?

This patch will be included in the next release of PgPro EE.
Concerning the next commitfest, I am not sure.
At the previous commitfest it was returned with the feedback that it "has
received no review or comments since last May".
Maybe your review will help to change this situation.

To review this patch, I tested it. The test environment was
PostgreSQL 11beta2. Because of recent PostgreSQL changes, "executor/spi.h"
and "jit/jit.h"
now have to be added to postgres.c of the patch. Please rebase.

New version of the patch is attached.

Attachments:

autoprepare-9.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance by more than a factor of two.
+  </para>
+
+  <para>
+    Unfortunately not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends) and so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build parameterized plans for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Please notice that, regardless of the
+    value of this parameter, Postgres makes the decision between using the
+    generalized plan and customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    If the number of different statements issued by an application is large enough,
+    then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). To prevent growth of the backend's memory because of
+    the autoprepare cache, it is possible to limit the number of autoprepared statements
+    by setting the <varname>autoprepare_limit</varname> GUC variable. An LRU strategy is used
+    to keep the most frequently used queries in memory.
+  </para>
+
+  <para>
+    It is possible to inspect the autoprepared queries in the backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (which replace literals) and
+    the query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 3ed9021..fdf8d52 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8226,6 +8226,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9537,6 +9542,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Please notice that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index f010cd4..43b9457 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 0070603..d9ef137 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 8cd8bf4..fef282a 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -291,6 +291,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
 	l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..f7b3581 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -797,7 +796,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index a10014f..0286587 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3713,6 +3713,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator():
+ * the callback is given a pointer to the pointer to the current node and can update that field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, updating them in place.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f413395..a1c76cd 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -969,6 +975,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find a cached plan for this query and execute it via
+	 * autoprepare.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* only single-statement commands can be autoprepared */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1868,9 +1884,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute a prepared plan. This is the tail of exec_execute_message,
+ * factored out so that exec_cached_query can reuse it.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1882,17 +1917,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1955,7 +1979,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4586,6 +4610,786 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached
+ * plans, and the exec_cached_query function, which combines
+ * exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list node used to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+} plan_cache_entry;
+
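+/*
+ * Plan cache hash table support: the hash value is precomputed during parse
+ * tree traversal, entries match when their generalized parse trees are
+ * equal() and their parameter type arrays coincide, and keys are deep-copied
+ * into the plan cache memory context.
+ */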
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+static void* plan_cache_keycopy_fn(void *dest, const void *src, Size keysize)
+{
+	plan_cache_entry* dst_entry = (plan_cache_entry*)dest;
+	plan_cache_entry* src_entry = (plan_cache_entry*)src;
+	dst_entry->parse_tree = copyObject(src_entry->parse_tree);
+	dst_entry->param_types = palloc(src_entry->n_params*sizeof(Oid));
+	dst_entry->n_params = src_entry->n_params;
+	memcpy(dst_entry->param_types, src_entry->param_types, src_entry->n_params*sizeof(Oid));
+	dst_entry->hash = src_entry->hash;
+	return dest;
+}
+
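+/*
+ * Initial size of the plan cache hash table, used when autoprepare_limit is
+ * zero (unlimited). Matches the default value of autoprepare_limit.
+ */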
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct a singly linked list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* doubly linked LRU list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals
+ * with parameters in such expressions, e.g. "1 + 2", which are likely to be
+ * folded by the planner)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
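+/*
+ * Convert the literal saved for a parameter into a Datum of the resolved
+ * parameter type: integer literals are converted directly, everything else
+ * goes through the type's input function.
+ */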
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. Only node tags are considered
+	 * here; precise comparison of trees is done with the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without
+	 * ParamRef nodes. This makes no difference, because the hash doesn't
+	 * take the types and values of Const nodes into account, so the same
+	 * generalized queries get the same hash value. There are about 1000
+	 * distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression, which is
+			 * likely to be the same for all queries and folded by the planner
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+		case T_SelectStmt:
+		{
+			/*
+			 * Substitute literals only in the target list, WHERE, VALUES and
+			 * WITH clauses, skipping the FROM list and other clauses, which
+			 * are unlikely to contain parameterizable values
+			 */
+			SelectStmt *stmt = (SelectStmt *) node;
+
+			if (stmt->intoClause)
+				return true;	/* utility statement, cannot be prepared */
+			if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+				return true;
+			if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+				return true;
+			if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+				return true;
+			if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+				return true;
+			if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+				return true;
+			if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+				return true;
+			break;
+		}
+		case T_TypeName:
+		case T_SortGroupClause:
+		case T_SortBy:
+		case T_A_ArrayExpr:
+		case T_TypeCast:
+			/*
+			 * Literals in these clauses should not be replaced with
+			 * parameters
+			 */
+			break;
+		default:
+			/*
+			 * Default traversal. raw_expression_tree_mutator returns true
+			 * for all unrecognized nodes; for example, transaction control
+			 * statements are currently not covered by it and so will not be
+			 * autoprepared. Experiments show that not autopreparing
+			 * begin/commit statements has a positive effect.
+			 */
+			return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
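+/*
+ * Mutator for the analyzed query tree: replace each Const node originating
+ * from a substituted literal (matched by source location) with a Param
+ * node, recording the parameter type resolved by parse analysis in the
+ * binding list.
+ */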
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/*
+				 * A literal can be bound to a parameter only once; mark the
+				 * binding type as unresolved so that autopreparing of this
+				 * query is disabled later.
+				 */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with
+ * the original literal (A_Const) nodes. Such an undo operation is cheaper
+ * than copying the whole parse tree in raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Callback for raw_expression_tree_walker dropping parse tree
+ */
+static bool drop_tree_node(Node* node, void* context)
+{
+	if (node) {
+		raw_expression_tree_walker(node, drop_tree_node, NULL);
+		pfree(node);
+	}
+	return false;
+}
+
+/*
+ * Location of converted literal in query.
+ * Used for precise error reporting (line number)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+/*
+ * Try to generalize the query, find a cached plan for it, and execute it.
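+ *
+ * For example, repeated executions of
+ *		SELECT * FROM t WHERE x = 1;
+ *		SELECT * FROM t WHERE x = 2;
+ * are generalized to
+ *		SELECT * FROM t WHERE x = $1;
+ * which is prepared once and then executed with the extracted literal
+ * passed as a parameter value.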
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		info.keycopy = plan_cache_keycopy_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE | HASH_KEYCOPY);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			Node* dropped_tree = victim->parse_tree;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			pfree(victim->param_types);
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			raw_expression_tree_walker(dropped_tree, drop_tree_node, NULL);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare the query only once it has been executed at least
+	 * autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
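+		/*
+		 * Build the final array of parameter types as resolved by parse
+		 * analysis; if any parameter's type could not be resolved, give up
+		 * and permanently disable autopreparing of this query.
+		 */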
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that is done
+		 * later by finish_xact_command.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to precisely report errors that occur
+		 * while converting and storing parameter values.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally, execute the prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
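+/*
+ * Reset the autoprepare cache: destroy the plan cache hash table and
+ * release the memory holding the generalized parse trees, so that
+ * subsequent queries are prepared afresh. Called on DDL, configuration
+ * changes and system cache invalidation.
+ */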
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		plan_cache_hash = NULL;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4623,3 +5427,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3, false);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index bdfb66f..63b6436 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -676,10 +676,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
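+			/* ALTER SYSTEM may change settings that affect cached plans */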
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
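+			/* SET may change settings (e.g. search_path) that affect cached plans */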
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -952,6 +954,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 16d86a2..3add869 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
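+	/* Autoprepared plans may depend on invalidated catalog entries */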
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index b05fb20..60952e7 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -475,6 +475,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int			autoprepare_threshold;
+int			autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2121,6 +2125,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("Zero disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("Zero means an unlimited number of autoprepared queries. Caching too many queries can exhaust backend memory and slow down execution because of increased lookup time.")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 40d54ed..6ea2ef8 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7637,6 +7637,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4013', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..883c39b 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 849f34d..ba97744 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,6 +73,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 507d89c..5770c42 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index f462eab..3227781 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,9 @@ extern PGDLLIMPORT int client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int	autoprepare_threshold;
+extern int	autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index ae0cd25..bdc28fe 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1297,6 +1297,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 16f979c..2a92588 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -111,7 +111,7 @@ test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combo
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare without_oid conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 42632be..02a7237 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -170,6 +170,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: without_oid
 test: conversion
 test: truncate
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#70Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Yamaji, Ryo (#67)
Re: [HACKERS] Cached plans and statement generalization

On 31.07.2018 12:12, Yamaji, Ryo wrote:

3. I observed how memory usage changes when trying to prepare more queries
than the value specified for autoprepare_limit.
[autoprepare_limit=1 and execute 10 different queries]
plan cache context: 1032 used
plan cache context: 39832 used
plan cache context: 78552 used
plan cache context: 117272 used
plan cache context: 155952 used
plan cache context: 194632 used
plan cache context: 233312 used
plan cache context: 272032 used
plan cache context: 310712 used
plan cache context: 349392 used
plan cache context: 388072 used

I am doubtful about the increase in memory utilization when I execute many
different queries even though only one query is cached (autoprepare_limit=1).
Is this behavior correct?

I failed to reproduce the problem.
I used the following non-default configuration parameters:

autoprepare_limit=1
autoprepare_threshold=1

create dummy database:

create table foo(x integer primary key, y integer);
insert into foo values (generate_series(1,10000), 0);

and run different queries,  like:

postgres=# select *  from foo where x=1;
postgres=# select *  from foo where x+x=1;
postgres=# select *  from foo where x+x+x=1;
postgres=# select *  from foo where x+x+x+x=1;
...

and check the size of CacheMemoryContext using gdb: it does not increase.
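
(In case it helps to repeat this check: memory context sizes can be inspected
from gdb with the standard MemoryContextStats() helper; the backend pid is of
course session-specific, so treat the session below only as a sketch.)

    $ gdb -p <backend_pid>
    (gdb) call MemoryContextStats(TopMemoryContext)

The per-context statistics, including "plan cache context", are printed to the
backend's stderr, i.e. the server log.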

Can you please send me your test?

#71Yamaji, Ryo
yamaji.ryo@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#70)
RE: [HACKERS] Cached plans and statement generalization

-----Original Message-----
From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
Sent: Wednesday, August 1, 2018 4:53 PM
To: Yamaji, Ryo/山地 亮 <yamaji.ryo@jp.fujitsu.com>
Cc: PostgreSQL mailing lists <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Cached plans and statement generalization

I failed to reproduce the problem.
I used the following non-default configuration parameters:

autoprepare_limit=1
autoprepare_threshold=1

create dummy database:

create table foo(x integer primary key, y integer);
insert into foo values (generate_series(1,10000), 0);

and run different queries,  like:

postgres=# select * from foo where x=1;
postgres=# select * from foo where x+x=1;
postgres=# select * from foo where x+x+x=1;
postgres=# select * from foo where x+x+x+x=1;
...

and check the size of CacheMemoryContext using gdb: it does not increase.

Can you please send me your test?

I checked "plan cache context" rather than CacheMemoryContext,
because I think "plan cache context" is the memory context that autoprepare mainly uses.

I used the same non-default configuration parameters as Konstantin:
autoprepare_limit=1
autoprepare_threshold=1

The test procedure I used is shown below.

create a dummy database:
create table test (key1 integer, key2 integer, ... , key100 integer);
insert into test values (1, 2, ... , 100);

Then, different queries are executed:
select key1 from test where key1=1 and key2=2 and ... and key100=100;
select key2 from test where key1=1 and key2=2 and ... and key100=100;
select key3 from test where key1=1 and key2=2 and ... and key100=100;...

And, "plan cache context" was confirmed by using gdb.

#72Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Yamaji, Ryo (#71)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 02.08.2018 08:25, Yamaji, Ryo wrote:

-----Original Message-----
From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
Sent: Wednesday, August 1, 2018 4:53 PM
To: Yamaji, Ryo/山地 亮 <yamaji.ryo@jp.fujitsu.com>
Cc: PostgreSQL mailing lists <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Cached plans and statement generalization

I failed to reproduce the problem.
I used the following non-default configuration parameters:

autoprepare_limit=1
autoprepare_threshold=1

create dummy database:

create table foo(x integer primary key, y integer);
insert into foo values (generate_series(1,10000), 0);

and run different queries,  like:

postgres=# select * from foo where x=1;
postgres=# select * from foo where x+x=1;
postgres=# select * from foo where x+x+x=1;
postgres=# select * from foo where x+x+x+x=1;
...

and check the size of CacheMemoryContext using gdb: it does not increase.

Can you please send me your test?

I checked "plan cache context" rather than CacheMemoryContext,
because I think "plan cache context" is the memory context that autoprepare mainly uses.

I used the same non-default configuration parameters as Konstantin:
autoprepare_limit=1
autoprepare_threshold=1

The test procedure I used is shown below.

create a dummy database:
create table test (key1 integer, key2 integer, ... , key100 integer);
insert into test values (1, 2, ... , 100);

Then, different queries are executed:
select key1 from test where key1=1 and key2=2 and ... and key100=100;
select key2 from test where key1=1 and key2=2 and ... and key100=100;
select key3 from test where key1=1 and key2=2 and ... and key100=100;...

And, "plan cache context" was confirmed by using gdb.

Thank you.
Unfortunately expression_tree_walker is not consistent with copyObject:
I tried to use this walker to destroy the raw parse tree of an autoprepared
statement, but it looks like some nodes are not visited by
expression_tree_walker. So instead I create a separate memory context for
each autoprepared statement, which can then be deleted as a whole.
A new version of the patch is attached.
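
(The per-statement context pattern, in miniature, as it appears in the
attached patch; entry, pattern, victim and plan_cache_context are the names
used there:)

    /* each cached statement gets its own memory context under plan_cache_context */
    entry->context = AllocSetContextCreate(plan_cache_context,
                                           "AutopreparedStatementContext",
                                           ALLOCSET_START_SMALL_SIZES);
    MemoryContextSwitchTo(entry->context);
    entry->parse_tree = copyObject(pattern.parse_tree); /* deep copy lives in entry->context */
    ...
    /* on LRU eviction the whole parse tree is freed at once, no tree walk needed */
    MemoryContextDelete(victim->context);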

Attachments:

autoprepare-10.patchtext/x-patch; name=autoprepare-10.patchDownload
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements, eliminating the cost of their compilation
+    and optimization on each execution of the query. For simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance more than twofold.
+  </para>
+
+  <para>
+    Unfortunately not all database applications use prepared statements
+    and, moreover, it is not always possible to use them. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends), so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode overcomes this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build parameterized plans for them. Execution of
+    autoprepared statements is almost as fast as execution of
+    explicitly prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Please notice that, regardless of the
+    value of this parameter, Postgres decides between using the
+    generalized plan and customized execution plans by comparing
+    the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    If the number of different statements issued by an application is large enough,
+    then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to each backend). To prevent growth of backend memory due to the
+    autoprepare cache, it is possible to limit the number of autoprepared statements
+    by setting the <varname>autoprepare_limit</varname> GUC variable. An LRU strategy
+    is used to keep the most recently used queries in memory.
+  </para>
+
+  <para>
+    It is possible to inspect the autoprepared queries of a backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (which replace literals) and
+    the query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index fffb79f..26f561d 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8230,6 +8230,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9541,6 +9546,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Please notice that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index f010cd4..43b9457 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 0070603..d9ef137 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7251552..f721b73 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -291,6 +291,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
 	l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..f7b3581 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -797,7 +796,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index a10014f..0286587 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3713,6 +3713,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator():
+ * the callback is given a pointer to the pointer to the current node and can update this field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, updating them in place instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, since
+ * only such statements are generalized by autoprepare. If any other node
+ * is visited, traversal stops immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The walker has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f413395..9818166 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -969,6 +975,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1868,9 +1884,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1882,17 +1917,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1955,7 +1979,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4586,6 +4610,768 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans and the exec_cached_query function,
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* L1-list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* double linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context used for parse tree of this cache entry */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* pointer to last element "next" field address, used to construct L1 list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU L2-list for cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * Here we calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * That is fine, because the hash calculation doesn't take into account the types and values of Const nodes, so the same generalized queries
+	 * will have the same hash value. There are about 1000 different node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is likely to be the same in all queries and folded by the optimizer)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and the WHERE, VALUES and WITH clauses,
+		   * skipping the FROM, GROUP BY, HAVING, window, sort, limit and locking clauses, which are unlikely to contain parameterized values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* Utility statement can not be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* Parameter can be used only once */
+				binding->type = UNKNOWNOID;
+				//return (Node*)binding->param;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems more efficient than copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Location of converted literal in query.
+ * Used for precise error reporting (line number)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+/*
+ * Try to generalize the query, find a cached plan for it, and execute it
+ */
+static bool exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract array of parameters types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
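+	/*
+	 * Note: keysize == entrysize here, so on HASH_ENTER dynahash copies the
+	 * whole pattern struct (including the parse_tree and param_types
+	 * pointers) into the new entry as its key.
+	 */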
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		if (++autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+	/*
+	 * Prepare query only when it is executed not less than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name: it also becomes the default completion tag,
+		 * down inside PortalRun.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the error position precisely
+		 * in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
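+/*
+ * Reset the autoprepare cache, forgetting all autoprepared statements.
+ * This is called when utility statements or cache invalidations may make
+ * the cached generalized plans obsolete.
+ */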
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		plan_cache_hash = NULL;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4623,3 +5409,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3, false);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index b5804f6..901fc86 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -676,10 +676,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -952,6 +954,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 16d86a2..3add869 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c5ba149..d13a523 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -483,6 +483,10 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2129,6 +2133,28 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Number of executions after which a query is autoprepared."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Too many prepared queries can cause backend memory overflow and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a146510..f9ac66f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7637,6 +7637,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4013', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..883c39b 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 849f34d..ba97744 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,6 +73,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 507d89c..5770c42 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index f462eab..3227781 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,9 @@ extern PGDLLIMPORT int client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 078129f..7e7bcdc 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1297,6 +1297,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 16f979c..2a92588 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -111,7 +111,7 @@ test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combo
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare without_oid conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 42632be..02a7237 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -170,6 +170,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: without_oid
 test: conversion
 test: truncate
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#73Yamaji, Ryo
yamaji.ryo@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#72)
RE: [HACKERS] Cached plans and statement generalization

-----Original Message-----
From: Konstantin Knizhnik [mailto:k.knizhnik@postgrespro.ru]
Sent: Friday, August 3, 2018 7:02 AM
To: Yamaji, Ryo/山地 亮 <yamaji.ryo@jp.fujitsu.com>
Cc: PostgreSQL mailing lists <pgsql-hackers@postgresql.org>
Subject: Re: [HACKERS] Cached plans and statement generalization

Thank you.
Unfortunately expression_tree_walker is not consistent with copyObject:
I tried to use this walker to destroy the raw parse tree of an autoprepared
statement, but it looks like some nodes are not visited by
expression_tree_walker, so I had to create a separate memory context for each
autoprepared statement.
A new version of the patch is attached.

Thank you for attaching the patch.
I have confirmed the improvement to the plan cache context in the new patch.

This patch will be included in the next release of PgPro EE.
Concerning the next commitfest, I am not sure.
At the previous commitfest it was returned with the feedback that it "has received no review or comments since last May".
Maybe your review will help to change this situation.

I want to confirm one point.
If I review the autoprepare patch, are you ready to register it at a
commitfest in the near future? I fear that the autoprepare patch will never
be registered at a commitfest (for example, because you are too busy) and
will never be applied to PostgreSQL. If you are not ready to register the
patch, I would like to register it at a commitfest instead of you.

I agree it may be more useful to limit the amount of memory used by prepared
queries rather than the number of prepared statements.
But it is just more difficult to calculate and maintain (I am not sure
that just looking at CacheMemoryContext is enough for it).
Also, if the working set of queries (frequently repeated queries) doesn't
fit in memory, then autoprepare will be almost useless (because with
high probability a prepared query will be thrown away from the cache before
it can be reused). So limiting queries from the "application side" seems to
be more practical.

I see. But I fear that the autoprepare machinery may use an unpredictable
amount of memory when autoprepare_limit is specified as a number of prepared
statements. There may be situations in which autoprepare uses a lot of memory
(for example, when it has to prepare many long queries), so that other
processes (other PostgreSQL backends, or processes outside PostgreSQL) cannot
get memory. I hope a memory limit can be specified in the future.

#74Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Yamaji, Ryo (#73)
Re: [HACKERS] Cached plans and statement generalization

On 07.08.2018 13:02, Yamaji, Ryo wrote:

I want to confirm one point.
If I will have reviewed the autoprepare patch, then are you ready to register
the patch at commit fest in the near future? I fear that autoprepare patch do
not registered at commit fest in the future (for example, you are so busy), and
do not applied to PostgreSQL. If you are not ready to register the patch, I think
I want to register at commit fest instead of you.

I have registered the patch for the next commitfest.
For some reason it doesn't find the latest autoprepare-10.patch and
still refers to autoprepare-6.patch as the latest attachment.

I agree it may be more useful to limit the amount of memory used by prepared
queries rather than the number of prepared statements.
But it is just more difficult to calculate and maintain (I am not sure
that just looking at CacheMemoryContext is enough for it).
Also, if the working set of queries (frequently repeated queries) doesn't
fit in memory, then autoprepare will be almost useless (because with high
probability a prepared query will be thrown away from the cache before it
can be reused). So limiting queries from the "application side" seems to be
more practical.

I see. But I fear that autoprepare may use an unpredictable amount of memory
when autoprepare_limit only bounds the number of prepared statements. I think
there are situations in which autoprepare uses a lot of memory (e.g. it needs
to prepare a lot of long queries), so that other processes (other backend
processes in PostgreSQL, or processes outside PostgreSQL) cannot get memory.
I hope it will become possible to limit the amount of memory in the future.

Right now each prepared statement has two memory contexts: one for the raw
parse tree used as the hash table key and another for the cached plan itself.
Maybe it will be possible to combine them. To calculate the memory consumed
by cached plans, it is necessary to gather memory usage statistics
for all these contexts (which requires traversal of all of a context's
chunks) and sum them. That is much more complex and expensive
than the current check (++autoprepare_cached_plans > autoprepare_limit),
although I do not think that it will have a measurable impact on
performance...
Maybe there should be some faster way to estimate the memory consumed by
prepared statements.

So the current autoprepare_limit makes it possible to limit the number of
autoprepared statements and prevent memory overflow caused by execution of
a large number of different statements.
The question is whether we need a more precise mechanism which takes
into account the difference between small and large queries. Certainly a
simple query can require 10-100 times less memory than a complex query. But
memory contexts themselves (even with a small block size) somewhat reduce
the difference in memory footprint between queries, because of chunked
allocation.
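
As a sketch, summing a context's memory that way could look like the
function below. It uses the MemoryContextMethods stats callback, whose
exact signature differs between PostgreSQL versions, so treat the call
as an assumption rather than a stable API:

#include "postgres.h"
#include "nodes/memnodes.h"

/*
 * Total bytes allocated in a memory context, obtained by asking the
 * context's stats callback to fill in aggregate counters. This walks
 * the context's blocks, so it is much more expensive than a plain
 * counter check such as (++autoprepare_cached_plans > autoprepare_limit).
 */
static Size
get_memory_context_size(MemoryContext ctx)
{
	MemoryContextCounters counters = {0};

	ctx->methods->stats(ctx, NULL, NULL, &counters);
	return counters.totalspace;
}

Both contexts of a statement (raw parse tree and cached plan) would then be
summed into a global counter of autoprepare memory usage.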

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#75Yamaji, Ryo
yamaji.ryo@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#74)
RE: [HACKERS] Cached plans and statement generalization

On Tuesday, August 7, 2018 at 0:36 AM, Konstantin Knizhnik wrote:

I have registered the patch for next commitfest.
For some reasons it doesn't find the latest autoprepare-10.patch and still
refer to autoprepare-6.patch as the latest attachement.

I am sorry for the long delay in my response.
I confirmed it and have become a reviewer of the patch.
I am going to review it.

Right now each prepared statement has two memory contexts: one for the raw
parse tree used as the hash table key and another for the cached plan itself.
Maybe it will be possible to combine them. To calculate the memory consumed
by cached plans, it is necessary to gather memory usage statistics
for all these contexts (which requires traversal of all of a context's chunks)
and sum them. That is much more complex and expensive than the current check
(++autoprepare_cached_plans > autoprepare_limit), although I do not think
that it will have a measurable impact on performance...
Maybe there should be some faster way to estimate the memory consumed by
prepared statements.

So the current autoprepare_limit makes it possible to limit the number of
autoprepared statements and prevent memory overflow caused by execution of
a large number of different statements.
The question is whether we need a more precise mechanism which takes
into account the difference between small and large queries. Certainly a
simple query can require 10-100 times less memory than a complex query. But
memory contexts themselves (even with a small block size) somewhat reduce
the difference in memory footprint between queries, because of chunked
allocation.

Thank you for explaining the possible implementation and its problems.
In many cases, when actually designing a system, we estimate the amount of memory
to use and provision memory accordingly. Since memory usage has to be estimated
either way, whether we limit the amount of memory used by prepared queries or the
number of prepared statements, I think that there is no problem with the current
implementation.

Apart from the patch, if there were a tool that output an estimate of the
memory usage of a query as we enter it, autoprepare could be used more
comfortably. But that sounds difficult...

#76Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Yamaji, Ryo (#75)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 22.08.2018 07:54, Yamaji, Ryo wrote:

On Tuesday, August 7, 2018 at 0:36 AM, Konstantin Knizhnik wrote:

I have registered the patch for the next commitfest.
For some reason it doesn't find the latest autoprepare-10.patch and still
refers to autoprepare-6.patch as the latest attachment.

I am sorry for the long delay in my response.
I confirmed it and have become a reviewer of the patch.
I am going to review it.

Right now each prepared statement has two memory contexts: one for the raw
parse tree used as the hash table key and another for the cached plan itself.
Maybe it will be possible to combine them. To calculate the memory consumed
by cached plans, it is necessary to gather memory usage statistics
for all these contexts (which requires traversal of all of a context's chunks)
and sum them. That is much more complex and expensive than the current check
(++autoprepare_cached_plans > autoprepare_limit), although I do not think
that it will have a measurable impact on performance...
Maybe there should be some faster way to estimate the memory consumed by
prepared statements.

So the current autoprepare_limit makes it possible to limit the number of
autoprepared statements and prevent memory overflow caused by execution of
a large number of different statements.
The question is whether we need a more precise mechanism which takes
into account the difference between small and large queries. Certainly a
simple query can require 10-100 times less memory than a complex query. But
memory contexts themselves (even with a small block size) somewhat reduce
the difference in memory footprint between queries, because of chunked
allocation.

Thank you for explaining the possible implementation and its problems.
In many cases, when actually designing a system, we estimate the amount of memory
to use and provision memory accordingly. Since memory usage has to be estimated
either way, whether we limit the amount of memory used by prepared queries or the
number of prepared statements, I think that there is no problem with the current
implementation.

Apart from the patch, if there were a tool that output an estimate of the
memory usage of a query as we enter it, autoprepare could be used more
comfortably. But that sounds difficult...

I have added an autoprepare_memory_limit parameter which limits the
memory used by autoprepared statements rather than the number of such
statements.
If the value of this parameter is non-zero, then the total size of all memory
contexts used for autoprepared statements is maintained (it can be
inspected in a debugger in the autoprepare_used_memory variable).
As I already noted, calculating memory context sizes adds extra
overhead, but I did not notice any effect on performance. A new version
of the patch is attached.
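
A GUC definition for such a parameter would look roughly like the sketch
below; it uses the extension-style DefineCustomIntVariable() API for
illustration (the patch itself presumably wires the variable into guc.c),
and the descriptive strings and bounds are assumptions. Kilobyte units are
consistent with the autoprepare_used_memory > autoprepare_memory_limit * 1024
check in exec_cached_query.

#include "postgres.h"
#include <limits.h>
#include "utils/guc.h"

static int autoprepare_memory_limit = 0;

static void
define_autoprepare_memory_limit(void)
{
	DefineCustomIntVariable("autoprepare_memory_limit",
							"Limits memory used by autoprepared statements.",
							"Zero disables the memory-based limit.",
							&autoprepare_memory_limit,
							0,			/* boot value: disabled */
							0, INT_MAX, /* min, max (in kB) */
							PGC_USERSET,
							GUC_UNIT_KB,
							NULL, NULL, NULL);
}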

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-11.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements, eliminating the cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance more than two times.
+  </para>
+
+  <para>
+    Unfortunately not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends) and so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build a parameterized plan for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Please notice that, regardless of the
+    value of this parameter, Postgres makes the decision about using the
+    generalized plan vs. customized execution plans based on the results
+    of a comparison of the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    If the number of different statements issued by an application is large enough,
+    then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). To prevent growth of a backend's memory because of
+    the autoprepare cache, it is possible to limit the number of autoprepared statements
+    by setting the <varname>autoprepare_limit</varname> GUC variable. An LRU strategy
+    is used to keep the most recently used queries in memory.
+  </para>
+
+  <para>
+    It is possible to inspect the autoprepared queries in a backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (which replace literals) and
+    the query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 3bb48d4..a0c96e8 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8231,6 +8231,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9542,6 +9547,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Please notice that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index f010cd4..43b9457 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 0070603..d9ef137 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 7251552..f721b73 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -291,6 +291,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
 	l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index b945b15..f7b3581 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -797,7 +796,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index a10014f..0286587 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3713,6 +3713,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to the pointer to the current node and can update that field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, updating them in place instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, traversal stops immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f413395..e16aaef 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -969,6 +975,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1868,9 +1884,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1882,17 +1917,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -1955,7 +1979,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4586,6 +4610,794 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of raw parse tree mutator, hash table of cached plans and exec_cached_query function
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* L1-list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* double linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context used for parse tree of this cache entry */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+size_t autoprepare_used_memory;
+
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* pointer to last element "next" field address, used to construct L1 list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU L2-list for cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to prevent substitution of literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * We calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * That is not essential, because the hash calculation doesn't take into account the types and values of Const nodes, so the same
+	 * generalized queries will have the same hash value. There are about 1000 different node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in constant expressions (which are likely to be the same for all queries and folded at optimization time)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list, WHERE, VALUES and WITH clauses,
+		   * skipping the FROM list and other clauses, which are unlikely to contain parameterized values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* Utility statement can not be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* A parameter can be used only once: if the same literal
+				 * location is encountered again, mark its type as unresolved */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with the original literal nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Location of the literal being converted, within the query text.
+ * Used for precise error reporting (error position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
+ * Try to generalize query, find cached plan for it and execute
+ */
+static bool
+exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract array of parameters types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		autoprepare_cached_plans += 1;
+		while ((autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+			|| (autoprepare_memory_limit && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				if (autoprepare_memory_limit)
+					autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(victim_context);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(entry->context);
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(entry->plan->context);
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare query only when it is executed not less than autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(psrc->context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the error position precisely in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		plan_cache_hash = 0;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4623,3 +5435,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3, false);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index b5804f6..901fc86 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -676,10 +676,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -952,6 +954,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index 16d86a2..3add869 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -111,6 +111,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -644,6 +645,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c5ba149..d23a0e0 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -483,6 +483,11 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+int         autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2129,6 +2134,39 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing query."),
+		 gettext_noop("0 value disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal number of autoprepared queries."),
+		 gettext_noop("0 means unlimited number of autoprepared queries. Too large number of prepared queries can cause backend memory overflow and slowdown execution speed (because of increased lookup time)")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal size of memory used by autoprepared queries."),
+		 gettext_noop("0 means that there is no memory limit. Calculating memory used by prepared queries adds somme extra overhead, "
+					  "so non-zero value of this parameter may cause some slowdown. autoprepare_limit is much faster way to limit number of autoprepared statements"),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a146510..f9ac66f 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7637,6 +7637,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4013', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index ffec029..883c39b 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 849f34d..ba97744 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -73,6 +73,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 507d89c..5770c42 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index f462eab..a1b0454 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,10 @@ extern PGDLLIMPORT int client_min_messages;
 extern int	log_min_duration_statement;
 extern int	log_temp_files;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 078129f..7e7bcdc 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1297,6 +1297,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 16f979c..2a92588 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -111,7 +111,7 @@ test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combo
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare without_oid conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare without_oid conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 42632be..02a7237 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -170,6 +170,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: without_oid
 test: conversion
 test: truncate
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#77Yamaji, Ryo
yamaji.ryo@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#74)
RE: [HACKERS] Cached plans and statement generalization

On Tuesday, August 7, 2018 at 0:36 AM, Konstantin Knizhnik wrote:

I have registered the patch for the next commitfest.
For some reason it doesn't find the latest autoprepare-10.patch and still
refers to autoprepare-6.patch as the latest attachment.

I'm sorry for the late reply. I'm currently reviewing autoprepare.
I could not make it in time for the commitfest in September,
but I will continue to review for the next commitfest.

The performance tests show good results, listed below.
pgbench -s 100 -c 8 -t 10000 -S (average of 3 trials)
- all autoprepared statements use the same memory context:
18052.22706 TPS
- each autoprepared statement uses a separate memory context:
18607.95889 TPS
- memory usage is calculated (autoprepare_memory_limit):
19171.60457 TPS

From the above results, I think that the added/changed functions
will not affect performance. I am currently checking the behavior
when autoprepare_memory_limit is specified.
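
For reference, here is a minimal sketch of the kind of session I am using to
check the memory-limit behavior (the threshold and limit values below are
only illustrative, they are not taken from the actual test):

    SET autoprepare_threshold = 1;
    SET autoprepare_memory_limit = '1MB';
    -- run many distinct query shapes, then verify that the cache
    -- stays within the limit:
    SELECT count(*), sum(exec_count) FROM pg_autoprepared_statements;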

#78Dmitry Dolgov
9erthalion6@gmail.com
In reply to: Yamaji, Ryo (#77)
Re: [HACKERS] Cached plans and statement generalization

On Fri, Sep 28, 2018 at 10:46 AM Yamaji, Ryo <yamaji.ryo@jp.fujitsu.com> wrote:

On Tuesday, August 7, 2018 at 0:36 AM, Konstantin Knizhnik wrote:

I have registered the patch for the next commitfest.
For some reason it doesn't find the latest autoprepare-10.patch and still
refers to autoprepare-6.patch as the latest attachment.

I'm sorry for the late reply. I'm currently reviewing autoprepare.
I could not make it in time for the commitfest in September,
but I will continue to review for the next commitfest.

The performance tests show good results, listed below.
pgbench -s 100 -c 8 -t 10000 -S (average of 3 trials)
- all autoprepared statements use the same memory context:
18052.22706 TPS
- each autoprepared statement uses a separate memory context:
18607.95889 TPS
- memory usage is calculated (autoprepare_memory_limit):
19171.60457 TPS

From the above results, I think that the added/changed functions
will not affect performance. I am currently checking the behavior
when autoprepare_memory_limit is specified.

Hi,

Thanks for reviewing. Since another CF is about to end, maybe you can post the
full review feedback?

On Tue, Jul 31, 2018 at 11:30 PM Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:

Concerning the next commitfest - I am not sure.
At the previous commitfest it was returned with feedback that it "has
received no review or comments since last May".
Maybe your review will help to change this situation.

Well, I think that wasn't the only reason, and having some balance here would
be nice. Anyway, thanks for working on this patch!

Unfortunately, the patch needs to be rebased one more time. Also, I see there
were concerns about how reliable this patch is. Just out of curiosity, can you
say now that, going from v4 to v11, reliability is no longer a concern?

(If you don't mind, I'll also CC some of the reviewers who saw previous
versions).

#79Yamaji, Ryo
yamaji.ryo@jp.fujitsu.com
In reply to: Dmitry Dolgov (#78)
RE: [HACKERS] Cached plans and statement generalization

On Fri, Nov 30, 2018 at 3:48 AM, Dmitry Dolgov wrote:

Hi,

Thanks for reviewing. Since another CF is about to end, maybe you can
post the full review feedback?

Since I have been busy with my other work, I couldn't review.
I want to review in the next commitfest. I am sorry for postponing it many times.

#80Dmitry Dolgov
9erthalion6@gmail.com
In reply to: Yamaji, Ryo (#79)
Re: [HACKERS] Cached plans and statement generalization

On Fri, Nov 30, 2018 at 3:06 AM Yamaji, Ryo <yamaji.ryo@jp.fujitsu.com> wrote:

On Fri, Nov 30, 2018 at 3:48 AM, Dmitry Dolgov wrote:

Hi,

Thanks for reviewing. Since another CF is about to end, maybe you can
post the full review feedback?

Since I have been busy with my other work, I couldn't review.
I want to review in the next commitfest. I am sorry for postponing it many times.

No worries, just don't forget to post a review when it will be ready.

#81Nagaura, Ryohei
nagaura.ryohei@jp.fujitsu.com
In reply to: Dmitry Dolgov (#80)
RE: [HACKERS] Cached plans and statement generalization

Hi,

Although I became your reviewer, it seems difficult to give feedback in this CF.
I will continue to review, so would you please update your patch?
Until then I will review your current patch.

There is one question.
date_1.out, which seems to be a copy of date.out, includes trailing spaces and inconsistent indentation;
e.g., lines 3368 and 3380 in your current patch have
a trailing space at the end of the line and
different indentation.
This is also seen in date.out.
I'm not sure whether it is OK to add a new file with these issues just because an existing file has them.
Is it OK?

Best regards,
---------------------
Ryohei Nagaura

#82Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Nagaura, Ryohei (#81)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 29.01.2019 4:38, Nagaura, Ryohei wrote:

Hi,

Although I became your reviewer, it seems difficult to give feedback in this CF.
I will continue to review, so would you please update your patch?
Until then I will review your current patch.

There is one question.
date_1.out, which seems to be a copy of date.out, includes trailing spaces and inconsistent indentation;
e.g., lines 3368 and 3380 in your current patch have
a trailing space at the end of the line and
different indentation.
This is also seen in date.out.
I'm not sure whether it is OK to add a new file with these issues just because an existing file has them.
Is it OK?

Best regards,
---------------------
Ryohei Nagaura

A rebased version of the patch is attached.
Concerning the spaces in date.out and date_1.out - it is OK.
These files are just the output of test execution, and we cannot
expect the output of test scripts to be free of trailing spaces.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-12.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,89 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible <firstterm>prepare</firstterm>
+    frequently used statements to eliminate cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like ones in <filename>pgbench -S</filename>) using prepared statements
+    increase performance more than two times.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements
+    and, moreover, it is not always possible to use them. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by
+    different backends), so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build parameterized plans for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a
+    non-zero value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Please notice that, regardless of the
+    value of this parameter, Postgres makes the decision between using the
+    generalized plan and customized execution plans based on a comparison
+    of the average time of five customized plans with the time of the
+    generalized plan.
+  </para>
+
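+  <para>
+    As a minimal illustration (the table name below is a placeholder and the
+    threshold value is chosen just for the example), a session might enable
+    autoprepare and let repeated queries of the same shape be generalized:
+  </para>
+
+<programlisting>
+SET autoprepare_threshold = 2;
+
+SELECT * FROM accounts WHERE id = 10;  -- planned and executed as usual
+SELECT * FROM accounts WHERE id = 20;  -- same shape, executed again
+SELECT * FROM accounts WHERE id = 30;  -- may now use the generalized plan
+
+SELECT statement, parameter_types, exec_count
+  FROM pg_autoprepared_statements;
+</programlisting>
+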
+  <para>
+    If the number of different statements issued by an application is large,
+    then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement
+    cache is local to the backend). To prevent growth of the backend's memory
+    due to the autoprepare cache, the number of autoprepared statements can be
+    limited by setting the <varname>autoprepare_limit</varname> GUC variable.
+    An LRU strategy is used to keep the most frequently used queries in memory.
+  </para>
+
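+  <para>
+    As an illustrative sketch (the values below are arbitrary, not
+    recommendations), the cache can be capped like this:
+  </para>
+
+<programlisting>
+SET autoprepare_limit = 200;           -- keep at most 200 statements (LRU)
+SET autoprepare_memory_limit = '10MB'; -- or bound the memory used instead
+</programlisting>
+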
+  <para>
+    It is possible to inspect the autoprepared queries in a backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original
+    text of the query, the types of the extracted parameters (which replace
+    literals) and the query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index af4d062..def8dce 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8169,6 +8169,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9480,6 +9485,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Note that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 5dfdf54..9e3bd03 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 96d196d..4e82052 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index f4d9e9d..cb2fda3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -291,6 +291,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
 	l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index a98c836..cc22ad6 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -797,7 +796,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index a806d51..ec72b83 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3727,6 +3727,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to the pointer to the current node and can update that field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes; it updates them in place.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
 * this is used mainly for generalizing client queries, and only DML statements
 * are worth autopreparing. If any other node is visited, iteration stops
 * immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator callback has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
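+/*
+ * A minimal usage sketch (hypothetical, for illustration only, not part of
+ * this patch): a callback that counts ParamRef nodes while leaving the tree
+ * unchanged.  The callback receives the address of each node pointer and
+ * decides itself whether to recurse:
+ *
+ *	static bool
+ *	count_param_refs(Node **ref, void *context)
+ *	{
+ *		if (*ref == NULL)
+ *			return false;
+ *		if (IsA(*ref, ParamRef))
+ *			(*(int *) context)++;
+ *		return raw_expression_tree_mutator(*ref, count_param_refs, context);
+ *	}
+ */
+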
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index e773f20..a04fc60 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/planner.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1062,6 +1068,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find and execute a cached generalized plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1954,9 +1970,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1968,17 +2003,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -2041,7 +2065,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4687,6 +4711,794 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
 * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans, and the exec_cached_query
 * function, which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
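+/*
+ * For example (table and literal names are illustrative): once the query
+ *		SELECT * FROM t WHERE pk = 1001;
+ * has been executed autoprepare_threshold times, it is generalized to
+ *		SELECT * FROM t WHERE pk = $1;
+ * and subsequent executions with different literal values reuse the cached plan.
+ */
+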
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parse tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context used for parse tree of this cache entry */
+} plan_cache_entry;
+
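+/*
+ * Hash and match functions for the plan cache hash table. The hash value is
+ * precomputed while generalizing the raw parse tree, so the hash function
+ * just returns it; matching compares the generalized parse trees with
+ * equal() and the extracted parameter type arrays.
+ */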
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
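+/* Default initial size of the plan cache hash table (used when autoprepare_limit is 0) */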
+#define PLAN_CACHE_SIZE 113
+
+/*
 * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+size_t autoprepare_used_memory;
+
+
+/*
 * Context for raw_parse_tree_generalizer, the raw_expression_tree_mutator callback
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct the singly linked parameter list */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
static dlist_head plan_cache_lru;		  /* doubly linked LRU list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
 * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions, e.g. 2+2)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
 * Infer the type of a literal expression. Null literals are never replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
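+/*
+ * Convert a literal value to a Datum of the given parameter type: int4
+ * integer literals are used directly, everything else is converted from its
+ * string representation through the type's input function.
+ */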
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without ParamRef nodes.
+	 * This doesn't matter, because the hash calculation doesn't take into account the types and values of Const nodes, so queries with the same
+	 * generalized form get the same hash value. There are about 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is likely to be the same for all queries and folded by the planner)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Substitute literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and the WHERE, VALUES and WITH clauses,
+		   * skipping the FROM list and other clauses, which are unlikely to contain literals worth parameterizing
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+		  if (stmt->intoClause)
+			  return true; /* SELECT INTO is a utility statement and cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing begin/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
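+/*
+ * Mutator for the analyzed query tree: replace the Const nodes produced from
+ * the substituted literals (matched by their location in the query string)
+ * with Param nodes and record the resolved parameter types in the bindings.
+ */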
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/*
+				 * A literal can be bound to only one parameter; mark the type
+				 * as unknown so that autoprepare is disabled for this query.
+				 */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
 * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
 * Such an undo operation is cheaper than having raw_expression_tree_mutator copy the whole parse tree.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Location of converted literal in query.
+ * Used for precise error reporting (line number)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+
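+/*
+ * Estimate the total memory used by a memory context; used for the
+ * autoprepare_memory_limit accounting.
+ */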
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
 * Try to generalize the query, find a cached plan for it, and execute it.
 *
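+ * The flow is: (1) substitute literals in the raw parse tree with ParamRef
+ * nodes and compute a hash; (2) look the generalized tree up in the LRU plan
+ * cache; (3) once the execution count reaches autoprepare_threshold, analyze
+ * and rewrite the query and build a CachedPlanSource; (4) bind the remembered
+ * literal values as parameters and run the cached plan through an unnamed
+ * portal.
+ */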
+static bool
+exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for the cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		autoprepare_cached_plans += 1;
+		while ((autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+			|| (autoprepare_memory_limit && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				if (autoprepare_memory_limit)
+					autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(victim_context);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(entry->context);
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(entry->plan->context);
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare the query only once it has been executed at least autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert the raw parse tree to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(psrc->context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to precisely report the error position in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
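+/*
+ * Discard all autoprepared plans. Called when DDL, a settings change, or
+ * system cache invalidation may have made the cached generalized plans
+ * unsafe to reuse.
+ */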
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		plan_cache_hash = NULL;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4724,3 +5536,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set returning function reads all the autoprepared statements and
 * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 6ec795f..9fd5eb5 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -955,6 +957,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c216ed0..a4bb7ce 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -514,6 +514,11 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+int         autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2206,6 +2211,39 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Caching too many prepared queries can exhaust backend memory and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum amount of memory used by autoprepared queries."),
+		 gettext_noop("0 means that there is no memory limit. Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown. autoprepare_limit is a much cheaper way to limit the number of autoprepared statements."),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 3ecc2e1..8d51f8a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7506,6 +7506,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4013', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index a1ba714..913db50 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index a9f76bb..14ba68c 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -76,6 +76,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index bc61149..b3636b3 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index c07e7b9..616c847 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,10 @@ extern int	log_min_duration_statement;
 extern int	log_temp_files;
 extern double log_statement_sample_rate;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e384cd2..86848ca 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1297,6 +1297,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index cc0bbf5..80b767f 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -111,7 +111,7 @@ test: select_views portals_p2 foreign_key cluster dependency guc bitmapops combo
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 0c10c71..e324fad 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -166,6 +166,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: conversion
 test: truncate
 test: alter_table
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#83Nagaura, Ryohei
nagaura.ryohei@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#82)
RE: [HACKERS] Cached plans and statement generalization

Hi,

On Mon, Jan 28, 2019 at 10:46 PM, Konstantin Knizhnik wrote:

Rebased version of the patch is attached.

Thank you for your quick rebase.

These files are just the output of test execution, and it is not possible to expect an absence of
trailing spaces in the output of test scripts.

I understand this is because regression tests use exact matching.
I learned a lot. Thanks.

Best regards,
---------------------
Ryohei Nagaura

#84Yamaji, Ryo
yamaji.ryo@jp.fujitsu.com
In reply to: Konstantin Knizhnik (#82)
RE: [HACKERS] Cached plans and statement generalization

On Tue, Jan 29, 2019 at 10:46 AM, Konstantin Knizhnik wrote:

Rebased version of the patch is attached.

I'm sorry for the late review.
I confirmed the behavior of autoprepare-12.patch; my findings are summarized below.

・parameter
The expected behavior was observed for each setting value.
However, I think it would be kinder if the manual described that autoprepare
holds an unlimited number of statements when the value of
autoprepare_(memory_)limit is 0.
There is no problem with the behavior itself.

・pg_autoprepared_statements
I confirmed that the view can be queried properly.

・autoprepare cache retention period
I confirmed that autoprepared statements are deleted when a SET statement
or a DDL statement is executed. Although this differs from explicitly
prepared statements, it is acceptable for autoprepare.

・performance
I did not re-check the basic performance of autoprepare with this patch,
because the previous patch (autoprepare-11.patch) already showed no
performance problem. However, because it was argued that performance
degradation occurs when prepared statements are executed against a
partitioned table, I expected that autoprepare might exhibit similar
behavior, and measured the performance. I also suspected that the
plan_cache_mode setting might not apply to autoprepare, so I measured
each plan_cache_mode setting separately.
The results are below (in TPS):

plan_cache_mode     simple   simple(autoprepare)   prepare
auto                   130                   121     121.5
force_custom_plan    132.5                  90.7     122.7
force_generic_plan   126.7                  14.7      24.7

Performance degradation was observed when plan_cache_mode was specified
together with autoprepare. Is this behavior correct? I do not understand
why these results occur.

The performance test procedure is below:

drop table if exists rt;
create table rt (a int, b int, c int) partition by range (a);
\o /dev/null
select 'create table rt' || x::text || ' partition of rt for values from (' ||
(x)::text || ') to (' || (x+1)::text || ');' from generate_series(0, 1024) x;
\gexec
\o

pgbench -p port -T 60 -c 1 -n -f test.sql (-M prepared) postgres

test.sql
\set a random (0, 1023)
select * from rt where a = :a;
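
Note that the autoprepare cache is backend-local, so it cannot be inspected
from another session while pgbench is running. As a quick sanity check, the
query can be repeated in a psql session and the cache inspected there
(a sketch; the view and the GUC are the ones added by this patch):

set autoprepare_threshold = 1;
select * from rt where a = 42;
select * from rt where a = 42;
select statement, parameter_types, exec_count from pg_autoprepared_statements;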

#85Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Yamaji, Ryo (#84)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

Thank you very much for the review!

On 19.03.2019 5:56, Yamaji, Ryo wrote:

On Tue, Jan 29, 2019 at 10:46 AM, Konstantin Knizhnik wrote:

Rebased version of the patch is attached.

I'm sorry for the late review.
I confirmed the behavior of autoprepare-12.patch; my findings are summarized below.

・parameter
The expected behavior was observed for each setting value.
However, I think it would be kinder if the manual described that autoprepare
holds an unlimited number of statements when the value of
autoprepare_(memory_)limit is 0.
There is no problem with the behavior itself.

Sorry, I do not completely understand your concern.
The descriptions of autoprepare_limit and autoprepare_memory_limit already
explain the zero value:

autoprepare_limit:
0 means an unlimited number of autoprepared queries. Too large a number of
prepared queries can cause backend memory overflow and slow down
execution speed (because of increased lookup time).

autoprepare_memory_limit:
0 means that there is no memory limit. Calculating the memory used by
prepared queries adds some extra overhead,
so a non-zero value of this parameter may cause some slowdown.
autoprepare_limit is a much faster way to limit the number of autoprepared
statements.

Do you think these descriptions are unclear and should be rewritten?
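
For illustration, here is a minimal session sketch of how these settings fit
together (table t and column k are hypothetical names for this example; the
GUCs and the view are the ones added by the patch):

set autoprepare_threshold = 2;      -- generalize a statement once it has been executed twice
set autoprepare_limit = 100;        -- keep at most 100 autoprepared statements (LRU eviction)
set autoprepare_memory_limit = 0;   -- 0 = no memory limit, no accounting overhead
select * from t where k = 1;        -- executed as usual
select * from t where k = 2;        -- same shape with another literal: candidate for generalization
select * from pg_autoprepared_statements;   -- inspect statement, parameter_types, exec_count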

・pg_autoprepared_statements
I confirmed that the view can be queried properly.

・autoprepare cache retention period
I confirmed that autoprepared statements are deleted when a SET statement
or a DDL statement is executed. Although this differs from explicitly
prepared statements, it is acceptable for autoprepare.

・performance
I did not re-check the basic performance of autoprepare with this patch,
because the previous patch (autoprepare-11.patch) already showed no
performance problem. However, because it was argued that performance
degradation occurs when prepared statements are executed against a
partitioned table, I expected that autoprepare might exhibit similar
behavior, and measured the performance. I also suspected that the
plan_cache_mode setting might not apply to autoprepare, so I measured
each plan_cache_mode setting separately.
The results are below (in TPS):

plan_cache_mode     simple   simple(autoprepare)   prepare
auto                   130                   121     121.5
force_custom_plan    132.5                  90.7     122.7
force_generic_plan   126.7                  14.7      24.7

Performance degradation was observed when plan_cache_mode was specified
together with autoprepare. Is this behavior correct? I do not understand
why these results occur.

The performance test procedure is below:

drop table if exists rt;
create table rt (a int, b int, c int) partition by range (a);
\o /dev/null
select 'create table rt' || x::text || ' partition of rt for values from (' ||
(x)::text || ') to (' || (x+1)::text || ');' from generate_series(0, 1024) x;
\gexec
\o

pgbench -p port -T 60 -c 1 -n -f test.sql (-M prepared) postgres

test.sql
\set a random (0, 1023)
select * from rt where a = :a;

Autoprepare uses the same functions from plancache.c, so the
plan_cache_mode setting affects autoprepared statements as well as
explicitly prepared ones.

Below are my results of select-only pgbench:

plan_cache_mode     simple   simple(autoprepare)   prepare
auto                   23k                   42k       50k
force_custom_plan      23k                   24k       26k
force_generic_plan     23k                   44k       50k

As you can see, force_custom_plan slows down both explicitly prepared and
autoprepared statements.
Unfortunately, generic plans do not work well with partitioned tables
because they disable partition pruning.
On my system, the result of your query is the following:

plan_cache_mode     simple   simple(autoprepare)   prepare
auto                   232                   220       219
force_custom_plan      234                   175       211
force_generic_plan     230                    48        48

The conclusion is that forcing a generic plan can slow down queries on
partitioned tables.
If plan_cache_mode is not enforced, the standard Postgres strategy of
comparing the efficiency of generic and custom plans works well.
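
To make this concrete, here is a small sketch against the partitioned table
rt from your procedure (illustrative only; assumes the patch is applied):

set plan_cache_mode = 'auto';    -- let Postgres compare generic and custom plans
set autoprepare_threshold = 1;   -- autoprepare a statement after its first execution
select * from rt where a = 42;   -- literal is generalized to a parameter internally
select * from rt where a = 7;    -- reuses the cached generalized statement
select * from pg_autoprepared_statements;   -- exec_count grows on each reuse

With plan_cache_mode = 'auto' the cached statement can still fall back to
custom plans, so partition pruning keeps working; force_generic_plan removes
that fallback, which explains the slowdown above.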

Attached please find the rebased version of the patch.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-13.patchtext/x-patch; name=autoprepare-13.patchDownload
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    can increase performance more than two times.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends), so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build parameterized plans for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Please notice that, regardless of the
+    value of this parameter, Postgres makes a decision about using the
+    generalized plan vs. customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    If the number of different statements issued by an application is large enough,
+    then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). To prevent growth of backend memory because of the
+    autoprepare cache, it is possible to limit the number of autoprepared statements
+    by setting the <varname>autoprepare_limit</varname> GUC variable. An LRU strategy is used
+    to keep the most frequently used queries in memory.
+  </para>
+
+  <para>
+    It is possible to inspect autoprepared queries in the backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (which replace literals) and the
+    query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 0fd792f..39a5854 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8209,6 +8209,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9520,6 +9525,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client, from which this prepared statement
+        was created. Please notice that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d383de2..ec2eaec 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5262,9 +5262,57 @@ SELECT * FROM parent WHERE key = 2400;
      </varlistentry>
 
      </variablelist>
+
+     <varlistentry id="guc-autoprepare-threshold" xreflabel="autoprepare_threshold">
+      <term><varname>autoprepare_threshold</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_threshold</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Threshold for autopreparing a query. The <varname>autoprepare_threshold</varname> parameter
+         specifies how many times a statement should be executed before a generic plan for this statement is generated.
+         A zero value (the default) disables autoprepare.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+      <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximal number of autoprepared queries.
+         A zero value means an unlimited number of autoprepared queries.
+         Too large a number of prepared queries can cause backend memory overflow and slow down execution speed
+         (because of increased lookup time). The default value is <literal>113</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+      <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximal size of memory used by autoprepared queries.
+         A zero value (the default) means that there is no memory limit.
+         Calculating the memory used by prepared queries adds some extra overhead,
+         so a non-zero value of this parameter may cause some slowdown.
+         <varname>autoprepare_limit</varname> is a much faster way to limit the number of autoprepared statements.
+       </para>
+      </listitem>
+     </varlistentry>
     </sect2>
    </sect1>
-
+ 
    <sect1 id="runtime-config-logging">
     <title>Error Reporting and Logging</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a03ea14..e8fb66c 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 96d196d..4e82052 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index d962648..1253bd5 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -291,6 +291,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
 	l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index fc231ca..b1b8b76 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 8ed30c0..2c38ae5 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3753,6 +3753,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform a raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator():
+ * the callback is given a pointer to the pointer to the current node and can update this field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, performing in-place updates instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration stops immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f9ce3d8..9a84ad3 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/optimizer.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1062,6 +1068,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* only single-statement commands can be autoprepared */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1945,9 +1961,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1959,17 +1994,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -2032,7 +2056,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4678,6 +4702,794 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached
+ * plans, and the exec_cached_query function, which combines
+ * exec_parse_message + exec_bind_message + exec_execute_message.
+ */
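+
+/*
+ * Illustrative example (hypothetical table): the query
+ *
+ *		SELECT * FROM orders WHERE customer_id = 123;
+ *
+ * is generalized by substituting the literal with a parameter:
+ *
+ *		SELECT * FROM orders WHERE customer_id = $1;
+ *
+ * The generalized parse tree is used as the plan cache key, so repeated
+ * executions of the same query shape with different literal values share a
+ * single cached plan.
+ */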
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* generalized parse tree, used as hash key */
+	dlist_node		  lru;		  /* doubly linked list node implementing LRU eviction */
+	int64			  exec_count; /* number of executions of this query */
+	CachedPlanSource* plan;		  /* prepared plan, or NULL if not prepared yet */
+	uint32			  hash;		  /* hash calculated for this parse tree */
+	Oid*			  param_types;/* types of extracted parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context holding this entry's parse tree */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;		 /* executions served by a cached plan */
+size_t autoprepare_misses;		 /* executions that fell back to normal processing */
+size_t autoprepare_cached_plans; /* current number of entries in the cache */
+size_t autoprepare_used_memory;	 /* bytes accounted when autoprepare_memory_limit is set */
+
+
+/*
+ * Context for the raw parse tree generalizer (raw_expression_tree_mutator callback)
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* Hash of the parse tree, calculated during traversal */
+	ParamBinding** param_list_tail; /* Address of the last element's "next" field; used to append to the singly linked parameter list */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU list of cached queries (most recently used first) */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions).
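+ * Illustrative examples: "1 + 2" is a constant expression, so its literals
+ * are left alone; in "x + 1" the literal 1 would be substituted with a
+ * parameter.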
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer the type of a literal expression. Null literals should not be replaced with parameters.
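+ * Illustrative mapping: 1 -> INT4OID, 10000000000 -> INT8OID (an oversize
+ * integer arrives as T_Float), 3.14 -> NUMERICOID, 'abc' -> UNKNOWNOID
+ * (resolved later during analysis), B'1010' -> BITOID.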
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
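+/*
+ * Convert a literal value to a Datum of the given type. Plain integer
+ * literals are converted directly; everything else goes through the type's
+ * input function (e.g., a hypothetical literal '2020-01-01' bound to a DATE
+ * parameter would be converted by date_in).
+ */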
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here;
+	 * precise comparison of trees is done with the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without
+	 * ParamRef nodes. That is not a problem, because the hash calculation
+	 * doesn't take into account the types and values of Const nodes, so the
+	 * same generalized queries still get the same hash value. There are
+	 * about 1000 distinct node tags, which is why we rotate the hash by 10
+	 * bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (it is likely to be the same across queries and constant-folded by the planner anyway)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and in the WHERE,
+		   * VALUES and WITH clauses, skipping the FROM list and other
+		   * clauses, which are unlikely to contain parameterizable values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* SELECT INTO is a utility statement and cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for
+		 * all unrecognized nodes; for example, transaction control
+		 * statements are currently not covered by it and so will not be
+		 * autoprepared. Experiments show that not preparing BEGIN/COMMIT
+		 * statements actually has a positive effect.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
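+/*
+ * Mutator applied to the analyzed Query tree: replace Const nodes that
+ * originated from substituted literals (matched by source location) with
+ * PARAM_EXTERN Param nodes, recording the resolved parameter type in the
+ * corresponding ParamBinding.
+ */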
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/*
+				 * The same literal location matched more than once, so the
+				 * parameter type cannot be resolved reliably
+				 */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with
+ * Const nodes. Such an undo operation seems to be more efficient than
+ * copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Location of the literal currently being converted; used to report a
+ * precise error position within the query.
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+
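+/*
+ * Return the total space allocated in a memory context (including free
+ * space), as reported by the context's stats callback; used to account
+ * memory against autoprepare_memory_limit.
+ */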
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
+ * Try to generalize the query, look up a cached plan for it, and execute it
+ */
+static bool
+exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate a hash for the raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached-plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Hash table manipulations are performed in the plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Enforce limits on the number of cached plans and their memory consumption */
+		autoprepare_cached_plans += 1;
+		while ((autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+			|| (autoprepare_memory_limit && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				if (autoprepare_memory_limit)
+					autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(victim_context);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(entry->context);
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(entry->plan->context);
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare the query only after it has been executed at least autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert the raw parse tree to use literals again
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(psrc->context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the error position precisely if a conversion error occurs while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally, execute the prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
+void
+ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		plan_cache_hash = NULL;
+	}
+}
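+
+/*
+ * Note: the cache is reset wholesale on events that may invalidate
+ * generalized plans: ALTER SYSTEM, SET, utility statements processed by
+ * ProcessUtilitySlow, and full system cache invalidation (see the calls to
+ * ResetAutoprepareCache() in utility.c and inval.c below).
+ */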
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4715,3 +5527,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 5053ef0..02909e3 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -955,6 +957,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index aa564d1..e18b591 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -530,6 +530,11 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+int         autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2218,6 +2223,39 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("Zero disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("Zero means an unlimited number of autoprepared queries. Too many cached queries can cause backend memory overflow and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum size of memory used by autoprepared queries."),
+		 gettext_noop("Zero means there is no memory limit. Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown. autoprepare_limit is a much faster way to limit the number of autoprepared statements."),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
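+	/*
+	 * Illustrative configuration (not a recommendation): with
+	 * autoprepare_threshold = 5, a textual query is implicitly prepared
+	 * after its generalized form has been seen five times;
+	 * autoprepare_limit and autoprepare_memory_limit then bound the cache
+	 * by entry count and by kilobytes of cached-plan memory, respectively.
+	 */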
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 84120de..f6612fb 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7547,6 +7547,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index a1ba714..913db50 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 9875934..a81ca29 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -148,6 +148,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index bc61149..b3636b3 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 2712a77..b982eaa 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,10 @@ extern int	log_min_duration_statement;
 extern int	log_temp_files;
 extern double log_statement_sample_rate;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f104dc4..f856197 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1305,6 +1305,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index de4989f..932896b 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -116,7 +116,7 @@ test: json jsonb json_encoding jsonpath jsonb_jsonpath
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 175ee26..b660e05 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -171,6 +171,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: conversion
 test: truncate
 test: alter_table
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
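
The regression test above doubles as a usage demo. For anyone trying the patch interactively, a minimal psql session might look like the sketch below (semantics taken from the patch documentation and code; output not verified):

-- a sketch, assuming the patch is applied
SET autoprepare_threshold = 1;   -- generalize a statement after one execution
SELECT count(*) FROM pg_class WHERE relname = 'pg_class';
SELECT statement, parameter_types, exec_count
  FROM pg_autoprepared_statements;   -- the generalized statement should now be listed
SET autoprepare_threshold = 0;   -- bypasses the cache; existing entries remain visible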
#86Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#85)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

New version of the patch, fixing an error in autopreparing CTE queries.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-14.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of parsing,
+    compilation and optimization on each execution of the query. For simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    more than doubles performance.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements
+    and, moreover, it is not always possible to use them. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends), so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build parameterized plans for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimum number of times a statement must be
+    executed before it is autoprepared. Note that, regardless of the
+    value of this parameter, Postgres decides between using the
+    generalized plan and customized execution plans based on
+    a comparison of the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    If the number of distinct statements issued by an application is large enough,
+    then autopreparing all of them can exhaust memory
+    (especially if there are many active clients, because the prepared statement cache
+    is local to each backend). To prevent growth of a backend's memory due to the
+    autoprepare cache, it is possible to limit the number of autoprepared statements
+    by setting the <varname>autoprepare_limit</varname> GUC variable. An LRU strategy is used
+    to keep the most recently used queries in memory.
+  </para>
+
+  <para>
+    It is possible to inspect the autoprepared queries of a backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (which replace literals) and
+    the query execution counter.
+  </para>
+
+ </chapter>
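
As a quick illustration of the chapter above, the knobs it describes could be combined per session roughly as follows (a sketch based on the GUC names in this patch; not verified output):

SET autoprepare_threshold = 5;        -- start generalizing after five executions
SET autoprepare_limit = 200;          -- keep at most 200 autoprepared statements (LRU eviction)
SET autoprepare_memory_limit = 10240; -- optional memory cap; the code treats the value as kilobytes
SELECT * FROM pg_autoprepared_statements;  -- inspect what has been cached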
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index f4aabf5..1443a34 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8240,6 +8240,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9551,6 +9556,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Note that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
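
A small query against the view, following the column descriptions above (the regtype-to-oid cast is the one the parameter_types description suggests; a sketch, not verified):

SELECT statement,
       parameter_types,
       parameter_types::oid[] AS parameter_type_oids,
       exec_count
  FROM pg_autoprepared_statements
 ORDER BY exec_count DESC;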
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2166b99..9476710 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5297,9 +5297,57 @@ SELECT * FROM parent WHERE key = 2400;
      </varlistentry>
 
      </variablelist>
+
+     <varlistentry id="guc-autoprepare-threshold" xreflabel="autoprepare_threshold">
+      <term><varname>autoprepare_threshold</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_threshold</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Threshold for autopreparing a query. The <varname>autoprepare_threshold</varname> parameter
+         specifies how many times a statement must be executed before a generic plan is generated for it.
+         A zero value (the default) disables autoprepare.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+      <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximum number of autoprepared queries.
+         A zero value means the number of autoprepared queries is unlimited.
+         Caching too many prepared queries can exhaust backend memory and slow down execution
+         (because of increased lookup time). The default value is <literal>113</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+      <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximum amount of memory used by autoprepared queries, in kilobytes.
+         A zero value (the default) means that there is no memory limit.
+         Tracking the memory used by prepared queries adds some extra overhead,
+         so a non-zero value of this parameter may cause some slowdown.
+         <varname>autoprepare_limit</varname> is a much cheaper way to limit the number of autoprepared statements.
+       </para>
+      </listitem>
+     </varlistentry>
     </sect2>
    </sect1>
-
+ 
    <sect1 id="runtime-config-logging">
     <title>Error Reporting and Logging</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a03ea14..e8fb66c 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 96d196d..4e82052 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3f2a7ef..3f628ea 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -291,6 +291,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
 	l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index fc231ca..b1b8b76 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 8ed30c0..2c38ae5 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3753,6 +3753,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
 * This function implements a slightly different approach to tree updates than expression_tree_mutator():
 * the callback is given a pointer to the pointer to the current node and can update that field in place instead of returning a reference to a new node.
 * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
 *
 * This function does not need a QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes; it updates them in place.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The walker has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
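
Since the mutator above returns true for any node it does not recognize, only DML parse trees can be traversed completely; in autoprepare terms that means, roughly (a sketch against a hypothetical table t):

-- covered by the mutator: these can be generalized
SELECT * FROM t WHERE a = 1;
UPDATE t SET a = 2 WHERE a = 1;
-- not covered (utility statement): traversal stops and the query takes the regular path
VACUUM t;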
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f9ce3d8..2c69cfa 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/optimizer.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1062,6 +1068,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single-statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1945,9 +1961,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1959,17 +1994,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -2032,7 +2056,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4678,6 +4702,801 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans and the exec_cached_query function,
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* singly linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context used for parse tree of this cache entry */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+size_t autoprepare_used_memory;
+
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to build the singly linked parameter list */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to suppress substitution of literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer the type of a literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
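
The two helpers above decide how a replaced literal is typed and converted. A hedged SQL illustration of get_literal_type()'s rules (table t is hypothetical):

SELECT * FROM t WHERE i = 42;          -- T_Integer literal -> int4 parameter
SELECT * FROM t WHERE i = 5000000000;  -- oversize integer (lexed as T_Float) -> int8
SELECT * FROM t WHERE x = 3.14;        -- non-integer numeric literal -> numeric
SELECT * FROM t WHERE s = 'abc';       -- string literal -> unknown, typed during analysis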
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * We calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * That is harmless, because the hash calculation doesn't take the types and values of Const nodes into account, so the same generalized queries
+	 * will have the same hash value. There are about 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is likely to be the same for all queries and folded at planning time)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list and the WHERE, VALUES and WITH clauses,
+		   * skipping the FROM, GROUP BY, ORDER BY and similar clauses, which are unlikely to contain parameterizable values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* Utility statements cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that not preparing start/commit transaction statements has a positive effect.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
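
Putting the cases above together, the generalizer parameterizes plain literals but deliberately leaves several places untouched. A hedged sketch of the intended behavior (hypothetical table t; derived from the code above, not verified output):

SELECT * FROM t WHERE a = 10 AND b = 'abc';  -- 10 and 'abc' are replaced by parameters
SELECT * FROM t WHERE a = 1 + 2;             -- constant expression: literals kept as-is
SELECT * FROM t WHERE b = NULL;              -- null literal: never parameterized
SELECT a::numeric(10,2) FROM t;              -- literals inside casts and typmods kept as-is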
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* Parameter can be used only once */
+				binding->type = UNKNOWNOID;
+				//return (Node*)binding->param;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation is more efficient than copying the whole parse tree in raw_expression_tree_mutator
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Location of converted literal in query.
+ * Used for precise error reporting (line number)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
+ * Try to generalize the query, find a cached plan for it and execute it
+ */
+static bool
+exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		autoprepare_cached_plans += 1;
+		while ((autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+			|| (autoprepare_memory_limit && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				if (autoprepare_memory_limit)
+					autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(victim_context);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(entry->context);
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(entry->plan->context);
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare the query only once it has been executed at least autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert the raw parse tree to use the original literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		/*
+		 * Avoid modification of the original parse tree
+		 */
+		MemoryContextSwitchTo(old_context);
+		raw_parse_tree = copyObject(raw_parse_tree);
+		MemoryContextSwitchTo(MessageContext);
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(psrc->context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to precisely report conversion errors while storing parameter values.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	cplan = GetCachedPlan(psrc, params, false, NULL);
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally, execute the prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
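+/*
+ * Drop all autoprepared statements and release the memory used by the local
+ * plan cache.  This is called when cached plans may no longer be valid, for
+ * example on system cache invalidation or after SET / ALTER SYSTEM.
+ */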
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		plan_cache_hash = 0;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4715,3 +5534,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_prepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index edf24c4..ccdfd3f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -959,6 +961,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index cd5a65b..53af935 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -530,6 +530,11 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+int         autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2238,6 +2243,39 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Caching too many prepared queries can exhaust backend memory and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum amount of memory used by autoprepared queries."),
+		 gettext_noop("0 means that there is no memory limit. Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown. autoprepare_limit is a much faster way to limit the number of autoprepared statements."),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a7050ed..fe8e821 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7575,6 +7575,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index a1ba714..913db50 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 9875934..a81ca29 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -148,6 +148,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index bc61149..b3636b3 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 2712a77..b982eaa 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,10 @@ extern int	log_min_duration_statement;
 extern int	log_temp_files;
 extern double log_statement_sample_rate;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dae772d..cbcd65f 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1305,6 +1305,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index f320fb6..a45162a 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -116,7 +116,7 @@ test: json jsonb json_encoding jsonpath jsonpath_encoding jsonb_jsonpath
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 36644aa..fcd44e6 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -173,6 +173,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: conversion
 test: truncate
 test: alter_table
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#87Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#86)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

New version of the patch, disabling autoprepare for rules and handling
planner errors.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-15.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible <firstterm>prepare</firstterm>
+    frequently used statements to eliminate cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like ones in <filename>pgbench -S</filename>) using prepared statements
+    increase performance more than two times.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements,
+    and sometimes it is not even possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends), so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build parameterized plans for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Note that, regardless of the
+    value of this parameter, Postgres decides between using the
+    generalized plan and customized execution plans based on a
+    comparison of the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
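+
+  <para>
+    A minimal illustration (the table and values here are hypothetical):
+<programlisting>
+SET autoprepare_threshold = 1;
+SELECT * FROM accounts WHERE id = 100;
+SELECT * FROM accounts WHERE id = 200;  -- reuses the generalized plan
+</programlisting>
+  </para>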
+
+  <para>
+    If the number of distinct statements issued by an application is large,
+    then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to each backend). To bound the backend memory consumed by the
+    autoprepare cache, it is possible to limit the number of autoprepared statements
+    by setting the <varname>autoprepare_limit</varname> GUC variable. An LRU strategy
+    is used to keep the most recently used queries in memory.
+  </para>
+
+  <para>
+    Autoprepared queries in the backend can be inspected using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (which replace the literals), and
+    the query execution counter.
+  </para>
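+
+  <para>
+    For example (the output shown is illustrative):
+<programlisting>
+SELECT * FROM pg_autoprepared_statements;
+                statement                | parameter_types | exec_count
+-----------------------------------------+-----------------+------------
+ SELECT * FROM accounts WHERE id = 100;  | {integer}       |          2
+(1 row)
+</programlisting>
+  </para>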
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index f4aabf5..1443a34 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8240,6 +8240,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9551,6 +9556,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Note that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read-only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 2166b99..9476710 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5297,9 +5297,57 @@ SELECT * FROM parent WHERE key = 2400;
      </varlistentry>
 
      </variablelist>
+
+     <varlistentry id="guc-autoprepare-threshold" xreflabel="autoprepare_threshold">
+      <term><varname>autoprepare_threshold</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_threshold</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Threshold for autopreparing a query. The <varname>autoprepare_threshold</varname> parameter
+         specifies how many times a statement has to be executed before a generic plan for it is generated.
+         A zero value (the default) disables autoprepare.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+      <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximum number of autoprepared queries.
+         A zero value means an unlimited number of autoprepared queries.
+         Caching too many prepared queries can exhaust backend memory and slow down execution
+         (because of increased lookup time). The default value is <literal>113</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+      <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximum amount of memory used by autoprepared queries.
+         A zero value (the default) means that there is no memory limit.
+         Calculating the memory used by prepared queries adds some extra overhead,
+         so a non-zero value of this parameter may cause some slowdown.
+         <varname>autoprepare_limit</varname> is a much cheaper way to limit the number of autoprepared statements.
+       </para>
+      </listitem>
+     </varlistentry>
     </sect2>
    </sect1>
-
+ 
    <sect1 id="runtime-config-logging">
     <title>Error Reporting and Logging</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index a03ea14..e8fb66c 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 96d196d..4e82052 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 3f2a7ef..3f628ea 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -291,6 +291,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
 	l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index fc231ca..b1b8b76 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 			   const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 8ed30c0..2c38ae5 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3753,6 +3753,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to the pointer to the current node and can update this field in place
+ * instead of returning a reference to a new node. This makes it possible to remember changes and easily
+ * revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, performing in-place updates instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If any other node type is encountered, traversal stops immediately and true is returned.
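+ *
+ * A minimal callback sketch (hypothetical, for illustration only): count the
+ * A_Const nodes in a raw parse tree, recursing via this mutator:
+ *
+ *		static bool
+ *		count_consts(Node **ref, void *context)
+ *		{
+ *			Node	   *node = *ref;
+ *
+ *			if (node == NULL)
+ *				return false;
+ *			if (IsA(node, A_Const))
+ *				(*(int *) context)++;
+ *			return raw_expression_tree_mutator(node, count_consts, context);
+ *		}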
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator callback has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index f9ce3d8..8ec1471 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/optimizer.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1062,6 +1068,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to find cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single-statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1945,9 +1961,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1959,17 +1994,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -2016,7 +2040,8 @@ exec_execute_message(const char *portal_name, long max_rows)
 	/*
 	 * Report query to various monitoring facilities.
 	 */
-	debug_query_string = sourceText;
+	if (!debug_query_string)
+		debug_query_string = sourceText;
 
 	pgstat_report_activity(STATE_RUNNING, sourceText);
 
@@ -2032,7 +2057,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4678,6 +4703,836 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans and the exec_cached_query function,
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* single-linked list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* double linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context used for parse tree of this cache entry */
+} plan_cache_entry;
+
+/* Hash function for the plan cache: the hash is precomputed during parse tree traversal */
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+/* Match function for the plan cache: compare generalized parse trees and parameter type arrays */
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+size_t autoprepare_used_memory;
+
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* address of the last element's "next" field, used to construct a single-linked list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* double-linked LRU list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate a hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * Here we calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * This is not a problem, because the hash calculation doesn't take into account the types and values of Const nodes, so the same generalized queries
+	 * will have the same hash value. There are about 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is likely to be the same for all queries and folded by the planner)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list, WHERE, VALUES and WITH clauses
+		   * and in set-operation arms, skipping the remaining clauses (FROM, GROUP BY,
+		   * ORDER BY, LIMIT, ...), which are unlikely to contain parameterizable values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* SELECT INTO is a utility statement and cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* A parameter can be used only once; mark its type as unresolved */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with the original literal nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Location of the converted literal in the query.
+ * Used for precise error reporting (error position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
+ * Try to generalize the query, find a cached plan for it and execute it
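+ *
+ * The overall flow is: substitute literals with parameters in the raw parse
+ * tree, look the generalized tree up in the plan cache, fall back to normal
+ * query processing until the statement has been executed
+ * autoprepare_threshold times, then build (or reuse) a cached plan, bind the
+ * extracted literal values as parameters and execute the plan through an
+ * unnamed portal.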
+ */
+static bool
+exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+	bool              use_existing_plan;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract array of parameters types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		autoprepare_cached_plans += 1;
+		while ((autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+			|| (autoprepare_memory_limit && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				if (autoprepare_memory_limit)
+					autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(victim_context);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(entry->context);
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(entry->plan->context);
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare the query only when it has been executed at least autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert the raw parse tree back to using literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		/*
+		 * Avoid modification of the original parse tree
+		 */
+		MemoryContextSwitchTo(old_context);
+		raw_parse_tree = copyObject(raw_parse_tree);
+		MemoryContextSwitchTo(MessageContext);
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		if (querytree_list->length != 1)
+		{
+			if (snapshot_set)
+				PopActiveSnapshot();
+			entry->disable_autoprepare = true;
+			autoprepare_misses += 1;
+			MemoryContextSwitchTo(old_context);
+			return false;
+		}
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(psrc->context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+		use_existing_plan = false;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		use_existing_plan = true;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the error position precisely in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	PG_TRY();
+	{
+		cplan = GetCachedPlan(psrc, params, false, NULL);
+	}
+	PG_CATCH();
+	{
+		/*
+		 * In case of planner errors, revert back to the original query processing
+		 * and disable autoprepare for this query to avoid such problems in the future.
+		 */
+		FlushErrorState();
+
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		if (use_existing_plan)
+			undo_query_plan_changes(binding_list);
+
+		entry->disable_autoprepare = true;
+		autoprepare_misses += 1;
+		PortalDrop(portal, false);
+		return false;
+	}
+	PG_END_TRY();
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally, execute the prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
+void
+ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		plan_cache_hash = NULL;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4715,3 +5570,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index edf24c4..ccdfd3f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -959,6 +961,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index cd5a65b..53af935 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -530,6 +530,11 @@ int			tcp_keepalives_idle;
 int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+int         autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2238,6 +2243,39 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Caching too many prepared queries can exhaust backend memory and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum amount of memory used by autoprepared queries."),
+		 gettext_noop("0 means that there is no memory limit. Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown. autoprepare_limit is a much cheaper way to limit the number of autoprepared statements."),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index a7050ed..fe8e821 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7575,6 +7575,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index a1ba714..913db50 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 9875934..a81ca29 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -148,6 +148,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index bc61149..b3636b3 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 			   long count,
 			   DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 2712a77..b982eaa 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -253,6 +253,10 @@ extern int	log_min_duration_statement;
 extern int	log_temp_files;
 extern double log_statement_sample_rate;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index dae772d..cbcd65f 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1305,6 +1305,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index f320fb6..a45162a 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -116,7 +116,7 @@ test: json jsonb json_encoding jsonpath jsonpath_encoding jsonb_jsonpath
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index 36644aa..fcd44e6 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -173,6 +173,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: conversion
 test: truncate
 test: alter_table
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#88Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#87)
Re: [HACKERS] Cached plans and statement generalization

On Wed, Apr 10, 2019 at 12:52 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

New version of the patch, disabling autoprepare for rules and handling
planner errors.

Hi Konstantin,

This doesn't apply. Could we please have a fresh rebase for the new Commitfest?

Thanks,

--
Thomas Munro
https://enterprisedb.com

#89Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#88)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 01.07.2019 12:51, Thomas Munro wrote:

On Wed, Apr 10, 2019 at 12:52 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

New version of the patch, disabling autoprepare for rules and handling
planner errors.

Hi Konstantin,

This doesn't apply. Could we please have a fresh rebase for the new Commitfest?

Thanks,

Attached please find the rebased version of the patch.
This version is also available in the autoprepare branch of this repository
on GitHub:
https://github.com/postgrespro/postgresql.builtin_pool.git
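
For a quick try-out, a minimal psql session along the lines of the new
autoprepare regression test could look like this (just a sketch, assuming
the patch is applied and all other settings are left at their defaults):

    -- autoprepare a statement after its first execution
    set autoprepare_threshold = 1;

    -- the same query shape with different literal values should be able
    -- to reuse one generalized plan
    select count(*) from pg_class where relname = 'pg_class';
    select count(*) from pg_class where relname = 'pg_proc';

    -- inspect the session-local cache: statement text, extracted
    -- parameter types and execution counter
    select * from pg_autoprepared_statements;

    -- zero (the default) disables autoprepare and empties the cache
    set autoprepare_threshold = 0;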

Attachments:

autoprepare-16.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..b3309bd
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements, eliminating the cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like the ones in <filename>pgbench -S</filename>), using prepared statements
+    increases performance more than twofold.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements,
+    and sometimes it is not possible to use them. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by
+    different backends), so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build a parameterized plan for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default, autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Please note that, regardless of the
+    value of this parameter, Postgres decides between using the
+    generalized plan and customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    If the number of different statements issued by the application is large
+    enough, then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement
+    cache is local to the backend). To prevent growth of the backend's memory
+    due to the autoprepare cache, it is possible to limit the number of
+    autoprepared statements by setting the <varname>autoprepare_limit</varname>
+    GUC variable. An LRU strategy is used to keep the most frequently used
+    queries in memory.
+  </para>
+
+  <para>
+    It is possible to inspect autoprepared queries in the backend using
+    <literal>pg_autoprepared_statements</literal> view. It shows original text of the
+    query, types of the extracted parameters (replacing literals) and
+    query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index f2b9d40..cb703f2 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8314,6 +8314,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
      </row>
 
      <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
+     <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
      </row>
@@ -9630,6 +9635,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Please note that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a3..fcbb68b 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5327,9 +5327,57 @@ SELECT * FROM parent WHERE key = 2400;
      </varlistentry>
 
      </variablelist>
+
+     <varlistentry id="guc-autoprepare-threshold" xreflabel="autoprepare_threshold">
+      <term><varname>autoprepare_threshold</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_threshold</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Threshold for autopreparing a query. The <varname>autoprepare_threshold</varname> parameter
+         specifies how many times a statement should be executed before a generic plan for it is generated.
+         A zero value (the default) disables autoprepare.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+      <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximal number of autoprepared queries.
+         A zero value means an unlimited number of autoprepared queries.
+         Too many prepared queries can cause backend memory overflow and slow down execution
+         (because of increased lookup time). The default value is <literal>113</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+      <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximal size of memory used by autoprepared queries.
+         A zero value (the default) means that there is no memory limit.
+         Calculating the memory used by prepared queries adds some extra overhead,
+         so a non-zero value of this parameter may cause some slowdown.
+         <varname>autoprepare_limit</varname> is a much faster way to limit the number of autoprepared statements.
+       </para>
+      </listitem>
+     </varlistentry>
     </sect2>
    </sect1>
-
+ 
    <sect1 id="runtime-config-logging">
     <title>Error Reporting and Logging</title>
 
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f112..13f2f6d 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ab9f279 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ea4c85e..f5b9481 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -332,6 +332,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
     l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7..5d25b2e 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 									const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 05ae73f..176a2a7 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3753,6 +3753,454 @@ raw_expression_tree_walker(Node *node,
 }
 
 /*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator():
+ * the callback is given a pointer to the pointer to the current node and can update that field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, updating them in place instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration is immediately stopped and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The mutator has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume the mutator doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume the mutator doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
+/*
  * planstate_tree_walker --- walk plan state trees
  *
  * The walker has already visited the current node, and so we need only
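
The callback contract for raw_expression_tree_mutator mirrors
raw_expression_tree_walker, except that the callback receives the address of
each node pointer and may overwrite the node in place; returning true aborts
the traversal (and, because of the default arm above, so does reaching a node
type the mutator does not handle).  A minimal sketch of such a callback --
hypothetical, for illustration only, not part of the patch -- that merely
counts literal constants:

    static bool
    count_literals(Node **ref, void *context)
    {
        Node   *node = *ref;

        if (node == NULL)
            return false;
        if (IsA(node, A_Const))
        {
            (*(int *) context)++;   /* count the literal, leave it in place */
            return false;
        }
        /* recurse; unhandled node types make this return true (abort) */
        return raw_expression_tree_mutator(node, count_literals, context);
    }

It would be invoked on a raw statement as count_literals(&raw_stmt->stmt, &n).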
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1..2c760bd 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/optimizer.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char *query_string, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1062,6 +1068,16 @@ exec_simple_query(const char *query_string)
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
 	/*
+	 * Try to execute the query using an autoprepared (cached) plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* only single-statement commands can be autoprepared */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
+	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
 	foreach(parsetree_item, parsetree_list)
@@ -1945,9 +1961,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1959,17 +1994,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -2016,7 +2040,8 @@ exec_execute_message(const char *portal_name, long max_rows)
 	/*
 	 * Report query to various monitoring facilities.
 	 */
-	debug_query_string = sourceText;
+	if (!debug_query_string)
+		debug_query_string = sourceText;
 
 	pgstat_report_activity(STATE_RUNNING, sourceText);
 
@@ -2032,7 +2057,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4680,6 +4705,836 @@ log_disconnections(int code, Datum arg)
 }
 
 /*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached
+ * plans, and the exec_cached_query function, which combines the work of
+ * exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Address of the pointer to the literal node (used to revert raw parse tree changes) */
+	Oid			 raw_type;/* Parameter type deduced from the raw literal */
+	Oid			 type;	  /* Parameter type inferred during parse analysis */
+	struct ParamBinding* next; /* Singly linked list of this query's parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* doubly linked list node used to implement LRU eviction */
+	int64			  exec_count; /* number of times this query has been executed */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parse tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context used for parse tree of this cache entry */
+} plan_cache_entry;
+
+static uint32
+plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int
+plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+size_t autoprepare_used_memory;
+
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* Hash calculated for the parse tree during traversal */
+	ParamBinding** param_list_tail; /* Address of the last element's "next" field, used to append to the singly linked parameter list */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* doubly linked LRU list of cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if an expression is constant (used to avoid substituting literals with parameters in such expressions, e.g. "1 + 2")
+ */
+static bool
+is_constant_expression(Node *node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer the type of a literal expression.  (Null literals are never replaced with parameters.)
+ */
+static Oid
+get_literal_type(Value *val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum
+get_param_value(Oid type, Value *val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate the hash for the parse tree.  We consider only node tags here; precise comparison of trees is done with the equal() function.
+	 * The hash is calculated for the original (unpatched) tree, without ParamRef nodes.
+	 * That does not matter, because the hash doesn't take into account the types and values of Const nodes, so identical generalized queries
+	 * still get the same hash value.  There are about 1000 distinct node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			/*
+			 * Do not substitute literals in a constant expression (which is likely to be the same for all queries and to be constant-folded)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list, WHERE, VALUES and WITH clauses
+		   * (and in the arms of set operations), skipping FROM, GROUP BY, HAVING, ORDER BY
+		   * and LIMIT, which are unlikely to contain parameterizable values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* SELECT INTO is executed as a utility statement and cannot be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal.  raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * Experiments show that not preparing BEGIN/COMMIT statements actually has a positive effect.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
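+
+/*
+ * For illustration (comment only): given the query
+ *     SELECT count(*) FROM pg_class WHERE relname = 'pg_class';
+ * the mutator above rewrites the raw parse tree as if the user had written
+ *     SELECT count(*) FROM pg_class WHERE relname = $1;
+ * and records the binding of $1 to the literal 'pg_class' in the
+ * ParamBinding list, from which the parameter value is later fetched.
+ */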
+
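+/*
+ * Second-pass mutator over the analyzed Query tree: it replaces the Const
+ * nodes that parse analysis produced from the original literals with Param
+ * nodes.  Each Const is matched to its ParamBinding by source location,
+ * which parse analysis preserves.
+ */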
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/* A parameter can be bound to only one literal; if the same location matches twice, invalidate the binding */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation is cheaper than copying the whole parse tree with raw_expression_tree_mutator.
+ */
+static void
+undo_query_plan_changes(ParamBinding *cp)
+{
+	while (cp != NULL)
+	{
+		*cp->ref = (Node *) cp->literal;
+		cp = cp->next;
+	}
+}
+
+/*
+ * Location in the query text of the literal being converted to a parameter.
+ * Used for precise error reporting (error cursor position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED)
+	{
+		int			pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+
+		(void) errposition(pos);
+	}
+}
+
+
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
+ * Try to generalize the query, look up a cached plan for it, and execute it.
+ * Returns true if the query was executed through the plan cache; returns false
+ * if the caller should fall back to normal query processing.
+ */
+static bool
+exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+	bool              use_existing_plan;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate hash for raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Manipulations with hash table are performed in plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Enforce limits on the number of cached queries and their memory consumption */
+		autoprepare_cached_plans += 1;
+		while ((autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+			|| (autoprepare_memory_limit && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				if (autoprepare_memory_limit)
+					autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(victim_context);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(entry->context);
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(entry->plan->context);
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare the query only once it has been executed at least autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert raw plan to use literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		/*
+		 * Avoid modification of the original parse tree
+		 */
+		MemoryContextSwitchTo(old_context);
+		raw_parse_tree = copyObject(raw_parse_tree);
+		MemoryContextSwitchTo(MessageContext);
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		if (list_length(querytree_list) != 1)
+		{
+			if (snapshot_set)
+				PopActiveSnapshot();
+			entry->disable_autoprepare = true;
+			autoprepare_misses += 1;
+			MemoryContextSwitchTo(old_context);
+			return false;
+		}
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(psrc->context);
+
+		/*
+		 * We do NOT close the open transaction command here; that is done at
+		 * the end of this function.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+		use_existing_plan = false;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		use_existing_plan = true;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback so that the error position is reported precisely in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	PG_TRY();
+	{
+		cplan = GetCachedPlan(psrc, params, false, NULL);
+	}
+	PG_CATCH();
+	{
+		/*
+		 * In case of planner errors, revert back to normal query processing
+		 * and disable autoprepare for this query to avoid such problems in the future.
+		 */
+		FlushErrorState();
+
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		if (use_existing_plan)
+			undo_query_plan_changes(binding_list);
+
+		entry->disable_autoprepare = true;
+		autoprepare_misses += 1;
+		PortalDrop(portal, false);
+		return false;
+	}
+	PG_END_TRY();
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally execute prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
+
+
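+/*
+ * Drop all autoprepared statements.  As the tcop/utility.c and
+ * utils/cache/inval.c hunks below show, this is called for utility
+ * statements, SET and ALTER SYSTEM, and on full system cache invalidation,
+ * any of which may invalidate a cached generalized plan.
+ */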
+void
+ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		plan_cache_hash = NULL;
+	}
+}
+
+
+/*
  * Start statement timeout timer, if enabled.
  *
  * If there's already a timeout running, don't restart the timer.  That
@@ -4717,3 +5572,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 05ec7f3..604b0d0 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -966,6 +968,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee..45aafdc 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -534,6 +534,11 @@ int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 int			tcp_user_timeout;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+int         autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2268,6 +2273,39 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("A value of 0 disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 means an unlimited number of autoprepared queries. Caching too many prepared queries can cause backend memory overflow and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum size of memory used by autoprepared queries."),
+		 gettext_noop("0 means there is no memory limit. Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown. autoprepare_limit is a much faster way to limit the number of autoprepared statements."),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 8733524..df1ed4e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7577,6 +7577,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index 2ce8324..11322ce 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 0cb931c..962c171 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -148,6 +148,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 9e24cfa..db3e448 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 							 long count,
 							 DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index e709177..eecedd5 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -256,6 +256,10 @@ extern int	log_temp_files;
 extern double log_statement_sample_rate;
 extern double log_xact_sample_rate;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000..728dab2
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
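
Note that the last part of the expected output above also exercises the cache
reset path: SET itself goes through standard_ProcessUtility, which calls
ResetAutoprepareCache(), so after "set autoprepare_threshold = 0" the
pg_autoprepared_statements view is empty again.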
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000..b6101c7
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 210e9cd..e553a69 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1307,6 +1307,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 8fb55f0..35926d9 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -106,7 +106,7 @@ test: json jsonb json_encoding jsonpath jsonpath_encoding jsonb_jsonpath
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index a39ca10..912bbc6 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -173,6 +173,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: conversion
 test: truncate
 test: alter_table
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000..98e4f7b
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#90Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#89)
Re: [HACKERS] Cached plans and statement generalization

On Tue, Jul 2, 2019 at 3:29 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Attached please find rebased version of the patch.
Also this version can be found in autoprepare branch of this repository
https://github.com/postgrespro/postgresql.builtin_pool.git
on github.

Thanks. I haven't looked at the code but this seems like interesting
work and I hope it will get some review. I guess this is bound to use
a lot of memory. I guess we'd eventually want to figure out how to
share the autoprepared plan cache between sessions, which is obviously
a whole can of worms.
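
For what it's worth, the per-session footprint can at least be observed and
bounded with the knobs the patch adds. A minimal sketch, using the view and
GUC names from the patch (the particular values here are just illustrative):

set autoprepare_threshold = 5;  -- autoprepare a statement after 5 executions
set autoprepare_limit = 100;    -- at most 100 autoprepared statements per backend (LRU eviction)
select statement, parameter_types, exec_count
from pg_autoprepared_statements
order by exec_count desc;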

A couple of trivial comments with my CF manager hat on:

1. Can you please fix the documentation? It doesn't build.
Obviously reviewing the goals, design and implementation are more
important than the documentation at this point, but if that is fixed
then the CF bot will be able to run check-world every day and we might
learn something about the code.
2. Accidental editor junk included: src/include/catalog/pg_proc.dat.~1~

--
Thomas Munro
https://enterprisedb.com

#91Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#90)
Re: [HACKERS] Cached plans and statement generalization

On 08.07.2019 2:23, Thomas Munro wrote:

On Tue, Jul 2, 2019 at 3:29 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Attached please find rebased version of the patch.
Also this version can be found in autoprepare branch of this repository
https://github.com/postgrespro/postgresql.builtin_pool.git
on github.

Thanks. I haven't looked at the code but this seems like interesting
work and I hope it will get some review. I guess this is bound to use
a lot of memory. I guess we'd eventually want to figure out how to
share the autoprepared plan cache between sessions, which is obviously
a whole can of worms.

A couple of trivial comments with my CF manager hat on:

1. Can you please fix the documentation? It doesn't build.
Obviously reviewing the goals, design and implementation are more
important than the documentation at this point, but if that is fixed
then the CF bot will be able to run check-world every day and we might
learn something about the code.
2. Accidental editor junk included: src/include/catalog/pg_proc.dat.~1~

Sorry, are you testing autoprepare-16.patch, which I sent in the last e-mail?
I cannot reproduce the problem with building the documentation:

knizhnik@xps:~/postgresql/doc$ make
make -C ../src/backend generated-headers
make[1]: Entering directory '/home/knizhnik/postgresql/src/backend'
make -C catalog distprep generated-header-symlinks
make[2]: Entering directory '/home/knizhnik/postgresql/src/backend/catalog'
make[2]: Nothing to be done for 'distprep'.
make[2]: Nothing to be done for 'generated-header-symlinks'.
make[2]: Leaving directory '/home/knizhnik/postgresql/src/backend/catalog'
make -C utils distprep generated-header-symlinks
make[2]: Entering directory '/home/knizhnik/postgresql/src/backend/utils'
make[2]: Nothing to be done for 'distprep'.
make[2]: Nothing to be done for 'generated-header-symlinks'.
make[2]: Leaving directory '/home/knizhnik/postgresql/src/backend/utils'
make[1]: Leaving directory '/home/knizhnik/postgresql/src/backend'
make -C src all
make[1]: Entering directory '/home/knizhnik/postgresql/doc/src'
make -C sgml all
make[2]: Entering directory '/home/knizhnik/postgresql/doc/src/sgml'
/usr/bin/xmllint --path . --noout --valid postgres.sgml
/usr/bin/xsltproc --path . --stringparam pg.version '12devel'
stylesheet.xsl postgres.sgml
cp ./stylesheet.css html/
touch html-stamp
/usr/bin/xmllint --path . --noout --valid postgres.sgml
/usr/bin/xsltproc --path . --stringparam pg.version '12devel'
stylesheet-man.xsl postgres.sgml
touch man-stamp
make[2]: Leaving directory '/home/knizhnik/postgresql/doc/src/sgml'
make[1]: Leaving directory '/home/knizhnik/postgresql/doc/src'

Also autoprepare-16.patch doesn't include any junk

src/include/catalog/pg_proc.dat.~1~

#92Thomas Munro
thomas.munro@gmail.com
In reply to: Konstantin Knizhnik (#91)
Re: [HACKERS] Cached plans and statement generalization

On Tue, Jul 9, 2019 at 7:32 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Sorry, are you testing autoprepare-16.patch, which I sent in the last e-mail?
I cannot reproduce the problem with building the documentation:

+ <term><varname>autoprepare_threshold</varname> (<type>integer/type>)

The problem is "<type>integer/type>" (missing "<"). There is more
than one of those. Not sure why that doesn't fail for you.
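
That is, the line should presumably read

<term><varname>autoprepare_threshold</varname> (<type>integer</type>)

and likewise for the other occurrences.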

Also autoprepare-16.patch doesn't include any junk

src/include/catalog/pg_proc.dat.~1~

Oops, yeah, sorry for the noise. I did something stupid in my apply script.

--
Thomas Munro
https://enterprisedb.com

#93Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Thomas Munro (#92)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 09.07.2019 15:16, Thomas Munro wrote:

On Tue, Jul 9, 2019 at 7:32 AM Konstantin Knizhnik
<k.knizhnik@postgrespro.ru> wrote:

Sorry, are you testing autoprepare-16.patch, which I sent in the last e-mail?
I cannot reproduce the problem with building the documentation:

+ <term><varname>autoprepare_threshold</varname> (<type>integer/type>)

The problem is "<type>integer/type>" (missing "<"). There is more
than one of those. Not sure why that doesn't fail for you.
Oops, yeah, sorry for the noise. I did something stupid in my apply script.

Sorry, there really were several syntax problems.
I don't understand why I didn't notice them before.
A fixed version of the patch is attached.
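
For the record, such tag typos can be caught before posting by running the same
validation step the documentation build uses, from doc/src/sgml:

xmllint --path . --noout --valid postgres.sgml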

Attachments:

autoprepare-17.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000000..d20a4d0098
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance more than two times.
+  </para>
+
+  <para>
+    Unfortunately not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends) and so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres tries to generalize executed statements
+    and build a parameterized plan for them. The execution speed of
+    autoprepared statements is almost the same as that of explicitly
+    prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_threshold</varname>.
+    This variable specifies the minimal number of times a statement should be
+    executed before it is autoprepared. Please notice that, regardless of the
+    value of this parameter, Postgres makes a decision about using the
+    generalized plan vs. customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    If the number of distinct statements issued by an application is large enough,
+    then autopreparing all of them can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). To prevent growth of the backend's memory because of
+    the autoprepare cache, it is possible to limit the number of autoprepared statements
+    by setting the <varname>autoprepare_limit</varname> GUC variable. An LRU strategy is used
+    to keep the most frequently used queries in memory.
+  </para>
+
+  <para>
+    It is possible to inspect autoprepared queries in the backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (which replace literals) and the
+    query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index f2b9d404cb..cb703f2084 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -8313,6 +8313,11 @@ SCRAM-SHA-256$<replaceable>&lt;iteration count&gt;</replaceable>:<replaceable>&l
       <entry>prepared statements</entry>
      </row>
 
+     <row>
+      <entry><link linkend="view-pg-autoprepared-statements"><structname>pg_autoprepared_statements</structname></link></entry>
+      <entry>autoprepared statements</entry>
+     </row>
+
      <row>
       <entry><link linkend="view-pg-prepared-xacts"><structname>pg_prepared_xacts</structname></link></entry>
       <entry>prepared transactions</entry>
@@ -9630,6 +9635,68 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   </para>
  </sect1>
 
+
+ <sect1 id="view-pg-autoprepared-statements">
+  <title><structname>pg_autoprepared_statements</structname></title>
+
+  <indexterm zone="view-pg-autoprepared-statements">
+   <primary>pg_autoprepared_statements</primary>
+  </indexterm>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view displays
+   all the autoprepared statements that are available in the current
+   session. See <xref linkend="autoprepare"/> for more information about autoprepared
+   statements.
+  </para>
+
+  <table>
+   <title><structname>pg_autoprepared_statements</structname> Columns</title>
+
+   <tgroup cols="3">
+    <thead>
+     <row>
+      <entry>Name</entry>
+      <entry>Type</entry>
+      <entry>Description</entry>
+     </row>
+    </thead>
+    <tbody>
+     <row>
+      <entry><structfield>statement</structfield></entry>
+      <entry><type>text</type></entry>
+      <entry>
+        The query string submitted by the client from which this prepared statement
+        was created. Note that literals in this statement are not
+        replaced with prepared statement placeholders.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>parameter_types</structfield></entry>
+      <entry><type>regtype[]</type></entry>
+      <entry>
+       The expected parameter types for the autoprepared statement in the
+       form of an array of <type>regtype</type>. The OID corresponding
+       to an element of this array can be obtained by casting the
+       <type>regtype</type> value to <type>oid</type>.
+      </entry>
+     </row>
+     <row>
+      <entry><structfield>exec_count</structfield></entry>
+      <entry><type>int8</type></entry>
+      <entry>
+        Number of times this statement was executed.
+      </entry>
+     </row>
+    </tbody>
+   </tgroup>
+  </table>
+
+  <para>
+   The <structname>pg_autoprepared_statements</structname> view is read only.
+  </para>
+ </sect1>
+
  <sect1 id="view-pg-prepared-xacts">
   <title><structname>pg_prepared_xacts</structname></title>
 
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 84341a30e5..8a9fff3756 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5326,6 +5326,53 @@ SELECT * FROM parent WHERE key = 2400;
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-autoprepare-threshold" xreflabel="autoprepare_threshold">
+      <term><varname>autoprepare_threshold</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_threshold</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Threshold for autopreparing a query. The <varname>autoprepare_threshold</varname> parameter
+         specifies how many times a statement should be executed before a generic plan for this statement is generated.
+         A zero value (default) disables autoprepare.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+      <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximal number of autoprepared queries.
+         A zero value means an unlimited number of autoprepared queries.
+         Too large a number of prepared queries can cause backend memory overflow and slow down execution speed
+         (because of increased lookup time). The default value is <literal>113</literal>.
+       </para>
+      </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+      <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+         Maximal size of memory used by autoprepared queries.
+         A zero value (default) means that there is no memory limit.
+         Calculating the memory used by prepared queries adds some extra overhead,
+         so a non-zero value of this parameter may cause some slowdown.
+         <varname>autoprepare_limit</varname> is a much faster way to limit the number of autoprepared statements.
+       </para>
+      </listitem>
+     </varlistentry>
      </variablelist>
     </sect2>
    </sect1>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 8960f11278..13f2f6d801 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1c76..ab9f2791a0 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ea4c85e395..f5b9481d17 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -332,6 +332,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
     l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c278ee7318..5d25b2e41a 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 									const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/nodes/nodeFuncs.c b/src/backend/nodes/nodeFuncs.c
index 05ae73f7db..176a2a703b 100644
--- a/src/backend/nodes/nodeFuncs.c
+++ b/src/backend/nodes/nodeFuncs.c
@@ -3752,6 +3752,454 @@ raw_expression_tree_walker(Node *node,
 	return false;
 }
 
+/*
+ * raw_expression_tree_mutator --- transform raw parse tree.
+ *
+ * This function implements a slightly different approach to tree updates than expression_tree_mutator().
+ * The callback is given a pointer to the pointer to the current node and can update this field instead of returning a reference to a new node.
+ * This makes it possible to remember changes and easily revert them without an extra traversal of the tree.
+ *
+ * This function does not need the QTW_DONT_COPY_QUERY flag: it never implicitly copies tree nodes, doing in-place updates instead.
+ *
+ * Like raw_expression_tree_walker, there is no special rule about query
+ * boundaries: we descend to everything that's possibly interesting.
+ *
+ * Currently, the node type coverage here extends only to DML statements
+ * (SELECT/INSERT/UPDATE/DELETE) and nodes that can appear in them, because
+ * this is used mainly during analysis of CTEs, and only DML statements can
+ * appear in CTEs. If some other node is visited, iteration stops immediately and true is returned.
+ */
+bool
+raw_expression_tree_mutator(Node *node,
+							bool (*mutator) (),
+							void *context)
+{
+	ListCell   *temp;
+
+	/*
+	 * The walker has already visited the current node, and so we need only
+	 * recurse into any sub-nodes it has.
+	 */
+	if (node == NULL)
+		return false;
+
+	/* Guard against stack overflow due to overly complex expressions */
+	check_stack_depth();
+
+	switch (nodeTag(node))
+	{
+		case T_SetToDefault:
+		case T_CurrentOfExpr:
+		case T_Integer:
+		case T_Float:
+		case T_String:
+		case T_BitString:
+		case T_Null:
+		case T_ParamRef:
+		case T_A_Const:
+		case T_A_Star:
+			/* primitive node types with no subnodes */
+			break;
+		case T_Alias:
+			/* we assume the colnames list isn't interesting */
+			break;
+		case T_RangeVar:
+			return mutator(&((RangeVar *) node)->alias, context);
+		case T_GroupingFunc:
+			return mutator(&((GroupingFunc *) node)->args, context);
+		case T_SubLink:
+			{
+				SubLink	   *sublink = (SubLink *) node;
+
+				if (mutator(&sublink->testexpr, context))
+					return true;
+				/* we assume the operName is not interesting */
+				if (mutator(&sublink->subselect, context))
+					return true;
+			}
+			break;
+		case T_CaseExpr:
+			{
+				CaseExpr   *caseexpr = (CaseExpr *) node;
+
+				if (mutator(&caseexpr->arg, context))
+					return true;
+				/* we assume mutator(& doesn't care about CaseWhens, either */
+				foreach(temp, caseexpr->args)
+				{
+					CaseWhen   *when = (CaseWhen *) lfirst(temp);
+
+					Assert(IsA(when, CaseWhen));
+					if (mutator(&when->expr, context))
+						return true;
+					if (mutator(&when->result, context))
+						return true;
+				}
+				if (mutator(&caseexpr->defresult, context))
+					return true;
+			}
+			break;
+		case T_RowExpr:
+			/* Assume colnames isn't interesting */
+			return mutator(&((RowExpr *) node)->args, context);
+		case T_CoalesceExpr:
+			return mutator(&((CoalesceExpr *) node)->args, context);
+		case T_MinMaxExpr:
+			return mutator(&((MinMaxExpr *) node)->args, context);
+		case T_XmlExpr:
+			{
+				XmlExpr	   *xexpr = (XmlExpr *) node;
+
+				if (mutator(&xexpr->named_args, context))
+					return true;
+				/* we assume mutator(& doesn't care about arg_names */
+				if (mutator(&xexpr->args, context))
+					return true;
+			}
+			break;
+		case T_NullTest:
+			return mutator(&((NullTest *) node)->arg, context);
+		case T_BooleanTest:
+			return mutator(&((BooleanTest *) node)->arg, context);
+		case T_JoinExpr:
+			{
+				JoinExpr   *join = (JoinExpr *) node;
+
+				if (mutator(&join->larg, context))
+					return true;
+				if (mutator(&join->rarg, context))
+					return true;
+				if (mutator(&join->quals, context))
+					return true;
+				if (mutator(&join->alias, context))
+					return true;
+				/* using list is deemed uninteresting */
+			}
+			break;
+		case T_IntoClause:
+			{
+				IntoClause *into = (IntoClause *) node;
+
+				if (mutator(&into->rel, context))
+					return true;
+				/* colNames, options are deemed uninteresting */
+				/* viewQuery should be null in raw parsetree, but check it */
+				if (mutator(&into->viewQuery, context))
+					return true;
+			}
+			break;
+		case T_List:
+			foreach(temp, (List *) node)
+			{
+				if (mutator(&lfirst(temp), context))
+					return true;
+			}
+			break;
+		case T_InsertStmt:
+			{
+				InsertStmt *stmt = (InsertStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->cols, context))
+					return true;
+				if (mutator(&stmt->selectStmt, context))
+					return true;
+				if (mutator(&stmt->onConflictClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_DeleteStmt:
+			{
+				DeleteStmt *stmt = (DeleteStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->usingClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_UpdateStmt:
+			{
+				UpdateStmt *stmt = (UpdateStmt *) node;
+
+				if (mutator(&stmt->relation, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->returningList, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+			}
+			break;
+		case T_SelectStmt:
+			{
+				SelectStmt *stmt = (SelectStmt *) node;
+
+				if (mutator(&stmt->distinctClause, context))
+					return true;
+				if (mutator(&stmt->intoClause, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->fromClause, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+				if (mutator(&stmt->groupClause, context))
+					return true;
+				if (mutator(&stmt->havingClause, context))
+					return true;
+				if (mutator(&stmt->windowClause, context))
+					return true;
+				if (mutator(&stmt->valuesLists, context))
+					return true;
+				if (mutator(&stmt->sortClause, context))
+					return true;
+				if (mutator(&stmt->limitOffset, context))
+					return true;
+				if (mutator(&stmt->limitCount, context))
+					return true;
+				if (mutator(&stmt->lockingClause, context))
+					return true;
+				if (mutator(&stmt->withClause, context))
+					return true;
+				if (mutator(&stmt->larg, context))
+					return true;
+				if (mutator(&stmt->rarg, context))
+					return true;
+			}
+			break;
+		case T_A_Expr:
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+
+				if (mutator(&expr->lexpr, context))
+					return true;
+				if (mutator(&expr->rexpr, context))
+					return true;
+				/* operator name is deemed uninteresting */
+			}
+			break;
+		case T_BoolExpr:
+			{
+				BoolExpr   *expr = (BoolExpr *) node;
+
+				if (mutator(&expr->args, context))
+					return true;
+			}
+			break;
+		case T_ColumnRef:
+			/* we assume the fields contain nothing interesting */
+			break;
+		case T_FuncCall:
+			{
+				FuncCall   *fcall = (FuncCall *) node;
+
+				if (mutator(&fcall->args, context))
+					return true;
+				if (mutator(&fcall->agg_order, context))
+					return true;
+				if (mutator(&fcall->agg_filter, context))
+					return true;
+				if (mutator(&fcall->over, context))
+					return true;
+				/* function name is deemed uninteresting */
+			}
+			break;
+		case T_NamedArgExpr:
+			return mutator(&((NamedArgExpr *) node)->arg, context);
+		case T_A_Indices:
+			{
+				A_Indices  *indices = (A_Indices *) node;
+
+				if (mutator(&indices->lidx, context))
+					return true;
+				if (mutator(&indices->uidx, context))
+					return true;
+			}
+			break;
+		case T_A_Indirection:
+			{
+				A_Indirection *indir = (A_Indirection *) node;
+
+				if (mutator(&indir->arg, context))
+					return true;
+				if (mutator(&indir->indirection, context))
+					return true;
+			}
+			break;
+		case T_A_ArrayExpr:
+			return mutator(&((A_ArrayExpr *) node)->elements, context);
+		case T_ResTarget:
+			{
+				ResTarget  *rt = (ResTarget *) node;
+
+				if (mutator(&rt->indirection, context))
+					return true;
+				if (mutator(&rt->val, context))
+					return true;
+			}
+			break;
+		case T_MultiAssignRef:
+			return mutator(&((MultiAssignRef *) node)->source, context);
+		case T_TypeCast:
+			{
+				TypeCast   *tc = (TypeCast *) node;
+
+				if (mutator(&tc->arg, context))
+					return true;
+				if (mutator(&tc->typeName, context))
+					return true;
+			}
+			break;
+		case T_CollateClause:
+			return mutator(&((CollateClause *) node)->arg, context);
+		case T_SortBy:
+			return mutator(&((SortBy *) node)->node, context);
+		case T_WindowDef:
+			{
+				WindowDef  *wd = (WindowDef *) node;
+
+				if (mutator(&wd->partitionClause, context))
+					return true;
+				if (mutator(&wd->orderClause, context))
+					return true;
+				if (mutator(&wd->startOffset, context))
+					return true;
+				if (mutator(&wd->endOffset, context))
+					return true;
+			}
+			break;
+		case T_RangeSubselect:
+			{
+				RangeSubselect *rs = (RangeSubselect *) node;
+
+				if (mutator(&rs->subquery, context))
+					return true;
+				if (mutator(&rs->alias, context))
+					return true;
+			}
+			break;
+		case T_RangeFunction:
+			{
+				RangeFunction *rf = (RangeFunction *) node;
+
+				if (mutator(&rf->functions, context))
+					return true;
+				if (mutator(&rf->alias, context))
+					return true;
+				if (mutator(&rf->coldeflist, context))
+					return true;
+			}
+			break;
+		case T_RangeTableSample:
+			{
+				RangeTableSample *rts = (RangeTableSample *) node;
+
+				if (mutator(&rts->relation, context))
+					return true;
+				/* method name is deemed uninteresting */
+				if (mutator(&rts->args, context))
+					return true;
+				if (mutator(&rts->repeatable, context))
+					return true;
+			}
+			break;
+		case T_TypeName:
+			{
+				TypeName   *tn = (TypeName *) node;
+
+				if (mutator(&tn->typmods, context))
+					return true;
+				if (mutator(&tn->arrayBounds, context))
+					return true;
+				/* type name itself is deemed uninteresting */
+			}
+			break;
+		case T_ColumnDef:
+			{
+				ColumnDef  *coldef = (ColumnDef *) node;
+
+				if (mutator(&coldef->typeName, context))
+					return true;
+				if (mutator(&coldef->raw_default, context))
+					return true;
+				if (mutator(&coldef->collClause, context))
+					return true;
+				/* for now, constraints are ignored */
+			}
+			break;
+		case T_IndexElem:
+			{
+				IndexElem  *indelem = (IndexElem *) node;
+
+				if (mutator(&indelem->expr, context))
+					return true;
+				/* collation and opclass names are deemed uninteresting */
+			}
+			break;
+		case T_GroupingSet:
+			return mutator(&((GroupingSet *) node)->content, context);
+		case T_LockingClause:
+			return mutator(&((LockingClause *) node)->lockedRels, context);
+		case T_XmlSerialize:
+			{
+				XmlSerialize *xs = (XmlSerialize *) node;
+
+				if (mutator(&xs->expr, context))
+					return true;
+				if (mutator(&xs->typeName, context))
+					return true;
+			}
+			break;
+		case T_WithClause:
+			return mutator(&((WithClause *) node)->ctes, context);
+		case T_InferClause:
+			{
+				InferClause *stmt = (InferClause *) node;
+
+				if (mutator(&stmt->indexElems, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_OnConflictClause:
+			{
+				OnConflictClause *stmt = (OnConflictClause *) node;
+
+				if (mutator(&stmt->infer, context))
+					return true;
+				if (mutator(&stmt->targetList, context))
+					return true;
+				if (mutator(&stmt->whereClause, context))
+					return true;
+			}
+			break;
+		case T_CommonTableExpr:
+			return mutator(&((CommonTableExpr *) node)->ctequery, context);
+		default:
+			return true;
+	}
+	return false;
+}
+
 /*
  * planstate_tree_walker --- walk plan state trees
  *
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 44a59e1d4f..2c760bd48c 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -41,6 +41,7 @@
 #include "access/xact.h"
 #include "catalog/pg_type.h"
 #include "commands/async.h"
+#include "commands/defrem.h"
 #include "commands/prepare.h"
 #include "executor/spi.h"
 #include "jit/jit.h"
@@ -48,6 +49,7 @@
 #include "libpq/pqformat.h"
 #include "libpq/pqsignal.h"
 #include "miscadmin.h"
+#include "nodes/nodeFuncs.h"
 #include "nodes/print.h"
 #include "optimizer/optimizer.h"
 #include "pgstat.h"
@@ -71,12 +73,15 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
 #include "utils/snapmgr.h"
 #include "utils/timeout.h"
 #include "utils/timestamp.h"
+#include "utils/int8.h"
 #include "mb/pg_wchar.h"
 
 
@@ -193,10 +198,11 @@ static bool IsTransactionExitStmtList(List *pstmts);
 static bool IsTransactionStmtList(List *pstmts);
 static void drop_unnamed_stmt(void);
 static void log_disconnections(int code, Datum arg);
+static bool exec_cached_query(const char* query, List *parsetree_list);
+static void exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest);
 static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
-
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1061,6 +1067,16 @@ exec_simple_query(const char *query_string)
 	 */
 	use_implicit_block = (list_length(parsetree_list) > 1);
 
+	/*
+	 * Try to find cached plan.
+	 */
+	if (autoprepare_threshold != 0
+		&& list_length(parsetree_list) == 1 /* we can prepare only single statement commands */
+		&& exec_cached_query(query_string, parsetree_list))
+	{
+		return;
+	}
+
 	/*
 	 * Run through the raw parsetree(s) and process each one.
 	 */
@@ -1945,9 +1961,28 @@ exec_bind_message(StringInfo input_message)
 static void
 exec_execute_message(const char *portal_name, long max_rows)
 {
-	CommandDest dest;
+	Portal portal = GetPortalByName(portal_name);
+	CommandDest dest = whereToSendOutput;
+
+	/* Adjust destination to tell printtup.c what to do */
+	if (dest == DestRemote)
+		dest = DestRemoteExecute;
+
+	if (!PortalIsValid(portal))
+		ereport(ERROR,
+				(errcode(ERRCODE_UNDEFINED_CURSOR),
+				 errmsg("portal \"%s\" does not exist", portal_name)));
+
+	exec_prepared_plan(portal, portal_name, max_rows, dest);
+}
+
+/*
+ * Execute prepared plan.
+ */
+static void
+exec_prepared_plan(Portal portal, const char *portal_name, long max_rows, CommandDest dest)
+{
 	DestReceiver *receiver;
-	Portal		portal;
 	bool		completed;
 	char		completionTag[COMPLETION_TAG_BUFSIZE];
 	const char *sourceText;
@@ -1959,17 +1994,6 @@ exec_execute_message(const char *portal_name, long max_rows)
 	bool		was_logged = false;
 	char		msec_str[32];
 
-	/* Adjust destination to tell printtup.c what to do */
-	dest = whereToSendOutput;
-	if (dest == DestRemote)
-		dest = DestRemoteExecute;
-
-	portal = GetPortalByName(portal_name);
-	if (!PortalIsValid(portal))
-		ereport(ERROR,
-				(errcode(ERRCODE_UNDEFINED_CURSOR),
-				 errmsg("portal \"%s\" does not exist", portal_name)));
-
 	/*
 	 * If the original query was a null string, just return
 	 * EmptyQueryResponse.
@@ -2016,7 +2040,8 @@ exec_execute_message(const char *portal_name, long max_rows)
 	/*
 	 * Report query to various monitoring facilities.
 	 */
-	debug_query_string = sourceText;
+	if (!debug_query_string)
+		debug_query_string = sourceText;
 
 	pgstat_report_activity(STATE_RUNNING, sourceText);
 
@@ -2032,7 +2057,7 @@ exec_execute_message(const char *portal_name, long max_rows)
 	 * context, because that may get deleted if portal contains VACUUM).
 	 */
 	receiver = CreateDestReceiver(dest);
-	if (dest == DestRemoteExecute)
+	if (dest == DestRemoteExecute || dest == DestRemote)
 		SetRemoteDestReceiverParams(receiver, portal);
 
 	/*
@@ -4679,6 +4704,836 @@ log_disconnections(int code, Datum arg)
 					port->remote_port[0] ? " port=" : "", port->remote_port)));
 }
 
+/*
+ * Autoprepare implementation.
+ * Autoprepare consists of a raw parse tree mutator, a hash table of cached plans and the exec_cached_query function,
+ * which combines exec_parse_message + exec_bind_message + exec_execute_message.
+ */
+
+/*
+ * Mapping between parameters and replaced literals
+ */
+typedef struct ParamBinding
+{
+	A_Const*	 literal; /* Original literal */
+	ParamRef*	 paramref;/* Constructed parameter reference */
+	Param*       param;   /* Constructed parameter */
+	Node**		 ref;	  /* Pointer to pointer to literal node (used to revert raw parse tree update) */
+	Oid			 raw_type;/* Parameter raw type */
+	Oid			 type;	  /* Parameter type after analysis */
+	struct ParamBinding* next; /* L1-list of query parameter bindings */
+} ParamBinding;
+
+/*
+ * Plan cache entry
+ */
+typedef struct
+{
+	Node*			  parse_tree; /* tree is used as hash key */
+	dlist_node		  lru;		  /* double linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+	CachedPlanSource* plan;
+	uint32			  hash;		  /* hash calculated for this parsed tree */
+	Oid*			  param_types;/* types of parameters */
+	int				  n_params;	  /* number of parameters extracted for this query */
+	int16			  format;	  /* portal output format */
+	bool			  disable_autoprepare; /* disable preparing of this query */
+	MemoryContext     context;    /* memory context used for parse tree of this cache entry */
+} plan_cache_entry;
+
+static uint32 plan_cache_hash_fn(const void *key, Size keysize)
+{
+	return ((plan_cache_entry*)key)->hash;
+}
+
+static int plan_cache_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	plan_cache_entry* e1 = (plan_cache_entry*)key1;
+	plan_cache_entry* e2 = (plan_cache_entry*)key2;
+
+	return equal(e1->parse_tree, e2->parse_tree)
+		&& memcmp(e1->param_types, e2->param_types, sizeof(Oid)*e1->n_params) == 0 ? 0 : 1;
+}
+
+#define PLAN_CACHE_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+size_t autoprepare_hits;
+size_t autoprepare_misses;
+size_t autoprepare_cached_plans;
+size_t autoprepare_used_memory;
+
+
+/*
+ * Context for raw_expression_tree_mutator
+ */
+typedef struct {
+	int			 n_params; /* Number of extracted parameters */
+	uint32		 hash;	   /* We calculate hash for parse tree during plan traversal */
+	ParamBinding** param_list_tail; /* pointer to last element "next" field address, used to construct L1 list of parameters */
+} GeneralizerCtx;
+
+
+static HTAB*	  plan_cache_hash; /* hash table for plan cache */
+static dlist_head plan_cache_lru;		  /* LRU L2-list for cached queries */
+static MemoryContext plan_cache_context; /* memory context used for plan cache */
+
+/*
+ * Check if expression is constant (used to avoid substitution of literals with parameters in such expressions)
+ */
+static bool is_constant_expression(Node* node)
+{
+	return node != NULL
+		&& (IsA(node, A_Const)
+			|| (IsA(node, A_Expr)
+				&& is_constant_expression(((A_Expr*)node)->lexpr)
+				&& is_constant_expression(((A_Expr*)node)->rexpr)));
+}
+
+/*
+ * Infer type of literal expression. Null literals should not be replaced with parameters.
+ */
+static Oid get_literal_type(Value* val)
+{
+	int64		val64;
+	switch (val->type)
+	{
+	  case T_Integer:
+		return INT4OID;
+	  case T_Float:
+		/* could be an oversize integer as well as a float ... */
+		if (scanint8(strVal(val), true, &val64))
+		{
+			/*
+			 * It might actually fit in int32. Probably only INT_MIN can
+			 * occur, but we'll code the test generally just to be sure.
+			 */
+			int32		val32 = (int32) val64;
+			return (val64 == (int64)val32) ? INT4OID : INT8OID;
+		}
+		else
+		{
+			return NUMERICOID;
+		}
+	  case T_BitString:
+		return BITOID;
+	  case T_String:
+		return UNKNOWNOID;
+	  default:
+		Assert(false);
+		return InvalidOid;
+	}
+}
+
+static Datum get_param_value(Oid type, Value* val)
+{
+	if (val->type == T_Integer && type == INT4OID)
+	{
+		/*
+		 * Integer constant
+		 */
+		return Int32GetDatum((int32)val->val.ival);
+	}
+	else
+	{
+		/*
+		 * Convert from string literal
+		 */
+		Oid	 typinput;
+		Oid	 typioparam;
+
+		getTypeInputInfo(type, &typinput, &typioparam);
+		return OidInputFunctionCall(typinput, val->val.str, typioparam, -1);
+	}
+}
+
+
+/*
+ * Callback for raw_expression_tree_mutator performing substitution of literals with parameters
+ */
+static bool
+raw_parse_tree_generalizer(Node** ref, void *context)
+{
+	Node* node = *ref;
+	GeneralizerCtx* ctx = (GeneralizerCtx*)context;
+	if (node == NULL)
+	{
+		return false;
+	}
+	/*
+	 * Calculate hash for the parse tree. We consider only node tags here; precise comparison of trees is done using the equal() function.
+	 * Here we calculate the hash for the original (unpatched) tree, without ParamRef nodes.
+	 * That does not matter, because hash calculation doesn't take into account the types and values of Const nodes, so the same generalized
+	 * queries will have the same hash value. There are about 1000 different node tags, which is why we rotate the hash by 10 bits.
+	 */
+	ctx->hash = (ctx->hash << 10) ^ (ctx->hash >> 22) ^ nodeTag(node);
+
+	switch (nodeTag(node))
+	{
+		case T_A_Expr:
+		{
+			 * Do not perform substitution of literals in a constant expression (which is likely to be the same for all queries and constant-folded by the optimizer)
+			 * Do not perform substitution of literals in constant expression (which is likely to be the same for all queries and optimized by compiler)
+			 */
+			if (!is_constant_expression(node))
+			{
+				A_Expr	   *expr = (A_Expr *) node;
+				if (raw_parse_tree_generalizer((Node**)&expr->lexpr, context))
+					return true;
+				if (raw_parse_tree_generalizer((Node**)&expr->rexpr, context))
+					return true;
+			}
+			break;
+		}
+		case T_A_Const:
+		{
+			/*
+			 * Do substitution of literals with parameters here
+			 */
+			A_Const* literal = (A_Const*)node;
+			if (literal->val.type != T_Null)
+			{
+				/*
+				 * Do not substitute null literals with parameters
+				 */
+				ParamBinding* cp = palloc0(sizeof(ParamBinding));
+				ParamRef* param = makeNode(ParamRef);
+				param->number = ++ctx->n_params;
+				param->location = literal->location;
+				cp->ref = ref;
+				cp->paramref = param;
+				cp->literal = literal;
+				cp->raw_type = get_literal_type(&literal->val);
+				*ctx->param_list_tail = cp;
+				ctx->param_list_tail = &cp->next;
+				*ref = (Node*)param;
+			}
+			break;
+		}
+	  case T_SelectStmt:
+	  {
+		  /*
+		   * Substitute literals only in the target list, WHERE, VALUES and WITH clauses,
+		   * skipping the FROM list and other clauses, which are unlikely to contain parameterized values
+		   */
+		  SelectStmt *stmt = (SelectStmt *) node;
+		  if (stmt->intoClause)
+			  return true; /* Utility statement can not be prepared */
+		  if (raw_parse_tree_generalizer((Node**)&stmt->targetList, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->whereClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->valuesLists, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->withClause, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->larg, context))
+			  return true;
+		  if (raw_parse_tree_generalizer((Node**)&stmt->rarg, context))
+			  return true;
+		  break;
+	  }
+	  case T_TypeName:
+	  case T_SortGroupClause:
+	  case T_SortBy:
+	  case T_A_ArrayExpr:
+	  case T_TypeCast:
+		/*
+		 * Literals in these clauses should not be replaced with parameters
+		 */
+		break;
+	  default:
+		/*
+		 * Default traversal. raw_expression_tree_mutator returns true for all unrecognized nodes; for example, right now
+		 * all transaction control statements are not covered by raw_expression_tree_mutator and so will not be autoprepared.
+		 * My experiments show that the effect of not preparing start/commit transaction statements is positive.
+		 */
+		return raw_expression_tree_mutator(node, raw_parse_tree_generalizer, context);
+	}
+	return false;
+}
+
+static Node*
+parse_tree_generalizer(Node *node, void *context)
+{
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list = (ParamBinding*)context;
+	if (node == NULL)
+	{
+		return NULL;
+	}
+	if (IsA(node, Query))
+	{
+		return (Node*)query_tree_mutator((Query*)node,
+										 parse_tree_generalizer,
+										 context,
+										 QTW_DONT_COPY_QUERY);
+	}
+	if (IsA(node, Const))
+	{
+		Const* c = (Const*)node;
+		int paramno = 1;
+		for (binding = binding_list; binding != NULL && binding->literal->location != c->location; binding = binding->next, paramno++);
+		if (binding != NULL)
+		{
+			if (binding->param != NULL)
+			{
+				/*
+				 * This literal was already bound to a parameter; a parameter
+				 * can be used only once, so mark the type as unknown, which
+				 * disables autoprepare for this query.
+				 */
+				binding->type = UNKNOWNOID;
+			}
+			else
+			{
+				Param* param = makeNode(Param);
+				param->paramkind = PARAM_EXTERN;
+				param->paramid = paramno;
+				param->paramtype = c->consttype;
+				param->paramtypmod = c->consttypmod;
+				param->paramcollid = c->constcollid;
+				param->location = c->location;
+				binding->type = c->consttype;
+				binding->param = param;
+				return (Node*)param;
+			}
+		}
+		return node;
+	}
+	return expression_tree_mutator(node, parse_tree_generalizer, context);
+}
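
The lookup in parse_tree_generalizer() relies on parse analysis preserving
token locations: a Const produced from one of the substituted literals is
matched back to its binding by scanning for the same source offset. A
pared-down standalone sketch of that lookup (not patch code; the struct keeps
only the two fields involved):

    #include <stdio.h>

    typedef struct Binding
    {
        int location;               /* byte offset of the literal in the query text */
        struct Binding *next;
    } Binding;

    /*
     * Find the binding whose literal has the given source location and report
     * its 1-based parameter number; returns NULL for Const nodes that were not
     * produced by literal substitution.
     */
    static Binding *
    find_binding(Binding *list, int const_location, int *paramno)
    {
        int n = 1;

        for (; list != NULL && list->location != const_location; list = list->next)
            n++;
        *paramno = n;
        return list;
    }

    int
    main(void)
    {
        Binding b2 = {42, NULL};
        Binding b1 = {17, &b2};
        int     paramno;

        if (find_binding(&b1, 42, &paramno))
            printf("literal at offset 42 becomes $%d\n", paramno);
        return 0;
    }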
+
+/*
+ * Restore the original parse tree, replacing all ParamRef nodes back with Const nodes.
+ * Such an undo operation seems to be more efficient than copying the whole parse tree with raw_expression_tree_mutator
+ */
+static void undo_query_plan_changes(ParamBinding* cp)
+{
+	while (cp != NULL) {
+		*cp->ref = (Node*)cp->literal;
+		cp = cp->next;
+	}
+}
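
The Node** bookkeeping above is what makes the rollback cheap: each binding
remembers the address of the tree slot it patched together with the original
literal, so undo is a single O(n_params) pass rather than a tree copy. A toy
standalone version of the same patch-and-undo pattern (not patch code):

    #include <stdio.h>

    typedef struct Patch
    {
        const char **ref;          /* slot in the tree that was overwritten */
        const char *old_value;     /* original value stored there */
        struct Patch *next;
    } Patch;

    static void
    undo(Patch *p)
    {
        for (; p != NULL; p = p->next)
            *p->ref = p->old_value;
    }

    int
    main(void)
    {
        const char *slot = "A_Const 42";    /* stands in for a literal node */
        Patch p = {&slot, slot, NULL};

        slot = "ParamRef $1";               /* substitution performed */
        printf("patched:  %s\n", slot);
        undo(&p);
        printf("restored: %s\n", slot);
        return 0;
    }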
+
+/*
+ * Location of the converted literal in the query.
+ * Used for precise error reporting (cursor position)
+ */
+static int param_location;
+
+/*
+ * Error callback adding information about error location
+ */
+static void
+prepare_error_callback(void *arg)
+{
+	CachedPlanSource *psrc = (CachedPlanSource*)arg;
+	/* And pass it to the ereport mechanism */
+	if (geterrcode() != ERRCODE_QUERY_CANCELED) {
+		int pos = pg_mbstrlen_with_len(psrc->query_string, param_location) + 1;
+		(void)errposition(pos);
+	}
+}
+
+
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
+ * Try to generalize the query, find a cached plan for it, and execute it
+ */
+static bool
+exec_cached_query(const char *query_string, List *parsetree_list)
+{
+	int				  n_params;
+	plan_cache_entry *entry;
+	bool			  found;
+	MemoryContext	  old_context;
+	CachedPlanSource *psrc;
+	ParamListInfo	  params;
+	int				  paramno;
+	CachedPlan		 *cplan;
+	Portal			  portal;
+	bool			  snapshot_set = false;
+	GeneralizerCtx	  ctx;
+	ParamBinding*	  binding;
+	ParamBinding*	  binding_list;
+	plan_cache_entry  pattern;
+	Oid*			  param_types;
+	RawStmt			 *raw_parse_tree;
+	bool              use_existing_plan;
+
+	raw_parse_tree = castNode(RawStmt, linitial(parsetree_list));
+
+	/*
+	 * Substitute literals with parameters and calculate a hash for the raw parse tree
+	 */
+	ctx.param_list_tail = &binding_list;
+	ctx.n_params = 0;
+	ctx.hash = 0;
+	if (raw_parse_tree_generalizer(&raw_parse_tree->stmt, &ctx))
+	{
+		*ctx.param_list_tail = NULL;
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+	*ctx.param_list_tail = NULL;
+	n_params = ctx.n_params;
+
+	/*
+	 * Extract the array of parameter types: it is needed for cached plan lookup
+	 */
+	param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+	for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+	{
+		param_types[paramno] = binding->raw_type;
+	}
+
+	/*
+	 * Construct plan cache context if not constructed yet.
+	 */
+	if (plan_cache_context == NULL)
+	{
+		plan_cache_context = AllocSetContextCreate(TopMemoryContext,
+												   "plan cache context",
+												   ALLOCSET_DEFAULT_SIZES);
+	}
+	/* Hash table manipulations are performed in the plan_cache_context memory context */
+	old_context = MemoryContextSwitchTo(plan_cache_context);
+
+	/*
+	 * Initialize hash table if not initialized yet
+	 */
+	if (plan_cache_hash == NULL)
+	{
+		static HASHCTL info;
+		info.keysize = sizeof(plan_cache_entry);
+		info.entrysize = sizeof(plan_cache_entry);
+		info.hash = plan_cache_hash_fn;
+		info.match = plan_cache_match_fn;
+		plan_cache_hash = hash_create("plan_cache", autoprepare_limit != 0 ? autoprepare_limit : PLAN_CACHE_SIZE,
+									  &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+		dlist_init(&plan_cache_lru);
+	}
+
+	/*
+	 * Lookup generalized query
+	 */
+	pattern.parse_tree = raw_parse_tree->stmt;
+	pattern.hash = ctx.hash;
+	pattern.n_params = n_params;
+	pattern.param_types = param_types;
+	entry = (plan_cache_entry*)hash_search(plan_cache_hash, &pattern, HASH_ENTER, &found);
+	if (!found)
+	{
+		/* Check number of cached queries */
+		autoprepare_cached_plans += 1;
+		while ((autoprepare_cached_plans > autoprepare_limit && autoprepare_limit != 0)
+			|| (autoprepare_memory_limit && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+		{
+			/* Drop least recently accessed query */
+			plan_cache_entry* victim = dlist_container(plan_cache_entry, lru, plan_cache_lru.head.prev);
+			MemoryContext victim_context = victim->context;
+			dlist_delete(&victim->lru);
+			if (victim->plan)
+			{
+				if (autoprepare_memory_limit)
+					autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+				DropCachedPlan(victim->plan);
+			}
+			hash_search(plan_cache_hash, victim, HASH_REMOVE, NULL);
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(victim_context);
+			MemoryContextDelete(victim_context);
+			autoprepare_cached_plans -= 1;
+		}
+		entry->exec_count = 0;
+		entry->plan = NULL;
+		entry->disable_autoprepare = false;
+		entry->context = AllocSetContextCreate(plan_cache_context,
+											   "AutopreparedStatementContext",
+											   ALLOCSET_START_SMALL_SIZES);
+		MemoryContextSwitchTo(entry->context);
+		entry->parse_tree = copyObject(pattern.parse_tree);
+		entry->param_types = palloc(n_params*sizeof(Oid));
+		memcpy(entry->param_types, pattern.param_types, n_params*sizeof(Oid));
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(entry->context);
+	}
+	else
+	{
+		dlist_delete(&entry->lru); /* accessed entry will be moved to the head of LRU list */
+		if (entry->plan != NULL && !entry->plan->is_valid)
+		{
+			/* Drop invalidated plan: it will be reconstructed later */
+			if (autoprepare_memory_limit)
+				autoprepare_used_memory -= get_memory_context_size(entry->plan->context);
+			DropCachedPlan(entry->plan);
+			entry->plan = NULL;
+		}
+	}
+	dlist_insert_after(&plan_cache_lru.head, &entry->lru); /* prepend entry to the head of LRU list */
+	MemoryContextSwitchTo(old_context); /* Done with plan_cache_context memory context */
+
+
+	/*
+	 * Prepare the query only once it has been executed at least autoprepare_threshold times
+	 */
+	if (entry->disable_autoprepare || ++entry->exec_count < autoprepare_threshold)
+	{
+		undo_query_plan_changes(binding_list);
+		autoprepare_misses += 1;
+		return false;
+	}
+
+	if (entry->plan == NULL)
+	{
+		bool		snapshot_set = false;
+		const char *commandTag;
+		List	   *querytree_list;
+
+		/*
+		 * Switch to appropriate context for preparing plan.
+		 */
+		old_context = MemoryContextSwitchTo(MessageContext);
+
+		/*
+		 * Get the command name for use in status display (it also becomes the
+		 * default completion tag, down inside PortalRun).  Set ps_status and
+		 * do any special start-of-SQL-command processing needed by the
+		 * destination.
+		 */
+		commandTag = CreateCommandTag(raw_parse_tree->stmt);
+
+		/*
+		 * If we are in an aborted transaction, reject all commands except
+		 * COMMIT/ABORT.  It is important that this test occur before we try
+		 * to do parse analysis, rewrite, or planning, since all those phases
+		 * try to do database accesses, which may fail in abort state. (It
+		 * might be safe to allow some additional utility commands in this
+		 * state, but not many...)
+		 */
+		if (IsAbortedTransactionBlockState() &&
+			!IsTransactionExitStmt(raw_parse_tree->stmt))
+			ereport(ERROR,
+					(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+					 errmsg("current transaction is aborted, "
+						  "commands ignored until end of transaction block"),
+					 errdetail_abort()));
+
+		/*
+		 * Create the CachedPlanSource before we do parse analysis, since it
+		 * needs to see the unmodified raw parse tree.
+		 */
+		psrc = CreateCachedPlan(raw_parse_tree, query_string, commandTag);
+
+		/*
+		 * Revert the raw parse tree back to literals
+		 */
+		undo_query_plan_changes(binding_list);
+
+		/*
+		 * Set up a snapshot if parse analysis/planning will need one.
+		 */
+		if (analyze_requires_snapshot(raw_parse_tree))
+		{
+			PushActiveSnapshot(GetTransactionSnapshot());
+			snapshot_set = true;
+		}
+
+		/*
+		 * Avoid modification of the original parse tree
+		 */
+		MemoryContextSwitchTo(old_context);
+		raw_parse_tree = copyObject(raw_parse_tree);
+		MemoryContextSwitchTo(MessageContext);
+
+		querytree_list = pg_analyze_and_rewrite(raw_parse_tree, query_string,
+												NULL, 0, NULL);
+		if (querytree_list->length != 1)
+		{
+			if (snapshot_set)
+				PopActiveSnapshot();
+			entry->disable_autoprepare = true;
+			autoprepare_misses += 1;
+			MemoryContextSwitchTo(old_context);
+			return false;
+		}
+		/*
+		 * Replace Const with Param nodes
+		 */
+		(void)query_tree_mutator((Query*)linitial(querytree_list),
+								 parse_tree_generalizer,
+								 binding_list,
+								 QTW_DONT_COPY_QUERY);
+
+		/* Done with the snapshot used for parsing/planning */
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		param_types = (Oid*)palloc(sizeof(Oid)*n_params);
+		psrc->param_types = param_types;
+		for (paramno = 0, binding = binding_list; paramno < n_params; paramno++, binding = binding->next)
+		{
+			if (binding->param == NULL || binding->type == UNKNOWNOID)
+			{
+				/* Failed to resolve parameter type */
+				entry->disable_autoprepare = true;
+				autoprepare_misses += 1;
+				MemoryContextSwitchTo(old_context);
+				return false;
+			}
+			param_types[paramno] = binding->type;
+		}
+
+		/* Finish filling in the CachedPlanSource */
+		CompleteCachedPlan(psrc,
+						   querytree_list,
+						   NULL,
+						   param_types,
+						   n_params,
+						   NULL,
+						   NULL,
+						   CURSOR_OPT_PARALLEL_OK,	/* allow parallel mode */
+						   true);	/* fixed result */
+
+		/* If we got a cancel signal during analysis, quit */
+		CHECK_FOR_INTERRUPTS();
+
+		SaveCachedPlan(psrc);
+		if (autoprepare_memory_limit)
+			autoprepare_used_memory += get_memory_context_size(psrc->context);
+
+		/*
+		 * We do NOT close the open transaction command here; that only happens
+		 * when the client sends Sync.  Instead, do CommandCounterIncrement just
+		 * in case something happened during parse/plan.
+		 */
+		CommandCounterIncrement();
+
+		MemoryContextSwitchTo(old_context); /* Done with MessageContext memory context */
+
+		entry->plan = psrc;
+		use_existing_plan = false;
+
+		/*
+		 * Determine output format
+		 */
+		entry->format = 0;				/* TEXT is default */
+		if (IsA(raw_parse_tree->stmt, FetchStmt))
+		{
+			FetchStmt  *stmt = (FetchStmt *)raw_parse_tree->stmt;
+
+			if (!stmt->ismove)
+			{
+				Portal		fportal = GetPortalByName(stmt->portalname);
+
+				if (PortalIsValid(fportal) &&
+					(fportal->cursorOptions & CURSOR_OPT_BINARY))
+					entry->format = 1; /* BINARY */
+			}
+		}
+	}
+	else
+	{
+		/* Plan found */
+		psrc = entry->plan;
+		use_existing_plan = true;
+		Assert(n_params == entry->n_params);
+	}
+
+	/*
+	 * If we are in aborted transaction state, the only portals we can
+	 * actually run are those containing COMMIT or ROLLBACK commands. We
+	 * disallow binding anything else to avoid problems with infrastructure
+	 * that expects to run inside a valid transaction.	We also disallow
+	 * binding any parameters, since we can't risk calling user-defined I/O
+	 * functions.
+	 */
+	if (IsAbortedTransactionBlockState() &&
+		(!IsTransactionExitStmt(psrc->raw_parse_tree->stmt) ||
+		 n_params != 0))
+		ereport(ERROR,
+				(errcode(ERRCODE_IN_FAILED_SQL_TRANSACTION),
+				 errmsg("current transaction is aborted, "
+						"commands ignored until end of transaction block"),
+				 errdetail_abort()));
+
+	/*
+	 * Create unnamed portal to run the query or queries in. If there
+	 * already is one, silently drop it.
+	 */
+	portal = CreatePortal("", true, true);
+	/* Don't display the portal in pg_cursors */
+	portal->visible = false;
+
+	/*
+	 * Prepare to copy stuff into the portal's memory context.	We do all this
+	 * copying first, because it could possibly fail (out-of-memory) and we
+	 * don't want a failure to occur between GetCachedPlan and
+	 * PortalDefineQuery; that would result in leaking our plancache refcount.
+	 */
+	old_context = MemoryContextSwitchTo(portal->portalContext);
+
+	/* Copy the plan's query string into the portal */
+	query_string = pstrdup(psrc->query_string);
+
+	/*
+	 * Set a snapshot if we have parameters to fetch (since the input
+	 * functions might need it) or the query isn't a utility command (and
+	 * hence could require redoing parse analysis and planning).  We keep the
+	 * snapshot active till we're done, so that plancache.c doesn't have to
+	 * take new ones.
+	 */
+	if (n_params > 0 ||
+		(psrc->raw_parse_tree &&
+		 analyze_requires_snapshot(psrc->raw_parse_tree)))
+	{
+		PushActiveSnapshot(GetTransactionSnapshot());
+		snapshot_set = true;
+	}
+
+	/*
+	 * Fetch parameters, if any, and store in the portal's memory context.
+	 */
+	if (n_params > 0)
+	{
+		ErrorContextCallback errcallback;
+
+		params = (ParamListInfo) palloc0(offsetof(ParamListInfoData, params) +
+										 n_params * sizeof(ParamExternData));
+		params->numParams = n_params;
+
+		/*
+		 * Register an error callback to report the precise error position in case of a conversion error while storing a parameter value.
+		 */
+		errcallback.callback = prepare_error_callback;
+		errcallback.arg = (void *) psrc;
+		errcallback.previous = error_context_stack;
+		error_context_stack = &errcallback;
+
+		for (paramno = 0, binding = binding_list;
+			 paramno < n_params;
+			 paramno++, binding = binding->next)
+		{
+			Oid	ptype = psrc->param_types[paramno];
+
+			param_location = binding->literal->location;
+
+			params->params[paramno].isnull = false;
+			params->params[paramno].value = get_param_value(ptype, &binding->literal->val);
+			/*
+			 * We mark the params as CONST.	 This ensures that any custom plan
+			 * makes full use of the parameter values.
+			 */
+			params->params[paramno].pflags = PARAM_FLAG_CONST;
+			params->params[paramno].ptype = ptype;
+		}
+		error_context_stack = errcallback.previous;
+	}
+	else
+	{
+		params = NULL;
+	}
+
+	/* Done storing stuff in portal's context */
+	MemoryContextSwitchTo(old_context);
+
+	/*
+	 * Obtain a plan from the CachedPlanSource.	 Any cruft from (re)planning
+	 * will be generated in MessageContext. The plan refcount will be
+	 * assigned to the Portal, so it will be released at portal destruction.
+	 */
+	PG_TRY();
+	{
+		cplan = GetCachedPlan(psrc, params, false, NULL);
+	}
+	PG_CATCH();
+	{
+		/*
+		 * In case of planner errors, revert to normal query processing and
+		 * disable autoprepare for this query to avoid such problems in the future.
+		 */
+		FlushErrorState();
+
+		if (snapshot_set)
+			PopActiveSnapshot();
+
+		if (use_existing_plan)
+			undo_query_plan_changes(binding_list);
+
+		entry->disable_autoprepare = true;
+		autoprepare_misses += 1;
+		PortalDrop(portal, false);
+		return false;
+	}
+	PG_END_TRY();
+
+	/*
+	 * Now we can define the portal.
+	 *
+	 * DO NOT put any code that could possibly throw an error between the
+	 * above GetCachedPlan call and here.
+	 */
+	PortalDefineQuery(portal,
+					  NULL,
+					  query_string,
+					  psrc->commandTag,
+					  cplan->stmt_list,
+					  cplan);
+
+	/* Done with the snapshot used for parameter I/O and parsing/planning */
+	if (snapshot_set)
+	{
+		PopActiveSnapshot();
+	}
+
+	/*
+	 * And we're ready to start portal execution.
+	 */
+	PortalStart(portal, params, 0, InvalidSnapshot);
+
+	/*
+	 * Apply the result format requests to the portal.
+	 */
+	PortalSetResultFormat(portal, 1, &entry->format);
+
+	/*
+	 * Finally, execute the prepared statement
+	 */
+	exec_prepared_plan(portal, "", FETCH_ALL, whereToSendOutput);
+
+	/*
+	 * Close down transaction statement, if one is open.
+	 */
+	finish_xact_command();
+
+	autoprepare_hits += 1;
+
+	return true;
+}
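
The eviction policy in exec_cached_query() can be read in isolation: entries
live on an intrusive doubly linked list, a hit moves the entry to the head,
and victims are taken from the tail while either the entry count or the
accounted memory exceeds its limit (zero meaning "unlimited"). A standalone
sketch of that discipline (not patch code; PostgreSQL's dlist and memory
accounting are replaced by plain C equivalents):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Entry
    {
        struct Entry *prev;
        struct Entry *next;
        const char *query;
        size_t      mem;           /* accounted memory, cf. get_memory_context_size() */
    } Entry;

    static Entry  head;            /* head.next = most recent, head.prev = least recent */
    static int    n_entries;
    static size_t used_mem;

    static void
    lru_unlink(Entry *e)
    {
        e->prev->next = e->next;
        e->next->prev = e->prev;
    }

    static void
    lru_push_head(Entry *e)
    {
        e->prev = &head;
        e->next = head.next;
        head.next->prev = e;
        head.next = e;
    }

    /* Mirror of the eviction loop above: drop victims from the tail. */
    static void
    evict(int limit, size_t mem_limit)
    {
        while ((limit && n_entries > limit) ||
               (mem_limit && used_mem > mem_limit))
        {
            Entry *victim = head.prev;

            lru_unlink(victim);
            n_entries -= 1;
            used_mem -= victim->mem;
            printf("evicted %s\n", victim->query);
            free(victim);
        }
    }

    int
    main(void)
    {
        const char *queries[] = {"q1", "q2", "q3"};
        int i;

        head.next = head.prev = &head;
        for (i = 0; i < 3; i++)
        {
            Entry *e = calloc(1, sizeof(Entry));

            e->query = queries[i];
            e->mem = 100;
            lru_push_head(e);
            n_entries += 1;
            used_mem += e->mem;
            evict(2, 0);           /* keep at most two cached plans */
        }
        return 0;
    }

Running it evicts q1 when q3 arrives, exactly as the patch drops the least
recently accessed plan once autoprepare_limit is exceeded.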
+
+
+void ResetAutoprepareCache(void)
+{
+	if (plan_cache_hash != NULL)
+	{
+		hash_destroy(plan_cache_hash);
+		MemoryContextReset(plan_cache_context);
+		dlist_init(&plan_cache_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		plan_cache_hash = NULL;
+	}
+}
+
+
 /*
  * Start statement timeout timer, if enabled.
  *
@@ -4717,3 +5572,89 @@ disable_statement_timeout(void)
 		stmt_timeout_active = false;
 	}
 }
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (plan_cache_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		plan_cache_entry *prep_stmt;
+
+		hash_seq_init(&hash_seq, plan_cache_hash);
+		while ((prep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (prep_stmt->plan != NULL && !prep_stmt->disable_autoprepare)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(prep_stmt->plan->query_string);
+				values[1] = build_regtype_array(prep_stmt->param_types,
+												prep_stmt->n_params);
+				values[2] = Int64GetDatum(prep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 05ec7f3ac6..604b0d0dd4 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -966,6 +968,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9aff..6168eee90b 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 92c4fee8f8..45aafdc8a3 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -534,6 +534,11 @@ int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 int			tcp_user_timeout;
 
+
+int         autoprepare_threshold;
+int         autoprepare_limit;
+int         autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2268,6 +2273,39 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Threshold for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_threshold", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Threshold for autopreparing a query."),
+		 gettext_noop("Zero disables autoprepare.")
+		},
+		&autoprepare_threshold,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("Zero means an unlimited number of autoprepared queries. Caching too many prepared queries can exhaust backend memory and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		113, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum amount of memory used by autoprepared queries."),
+		 gettext_noop("Zero means there is no memory limit. Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown. autoprepare_limit is a much faster way to limit the number of autoprepared statements."),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, 0, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 87335248a0..df1ed4ef8a 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7577,6 +7577,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index 2ce832419f..11322ce07d 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/nodes/nodeFuncs.h b/src/include/nodes/nodeFuncs.h
index 0cb931c82c..962c17139d 100644
--- a/src/include/nodes/nodeFuncs.h
+++ b/src/include/nodes/nodeFuncs.h
@@ -148,6 +148,9 @@ extern Node *query_or_expression_tree_mutator(Node *node, Node *(*mutator) (),
 extern bool raw_expression_tree_walker(Node *node, bool (*walker) (),
 									   void *context);
 
+extern bool raw_expression_tree_mutator(Node *node, bool (*mutator) (),
+										void *context);
+
 struct PlanState;
 extern bool planstate_tree_walker(struct PlanState *planstate, bool (*walker) (),
 								  void *context);
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 9e24cfa661..db3e448899 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 							 long count,
 							 DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index e709177c37..eecedd50db 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -256,6 +256,10 @@ extern int	log_temp_files;
 extern double log_statement_sample_rate;
 extern double log_xact_sample_rate;
 
+extern int  autoprepare_threshold;
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/test/regress/expected/autoprepare.out b/src/test/regress/expected/autoprepare.out
new file mode 100644
index 0000000000..728dab2c52
--- /dev/null
+++ b/src/test/regress/expected/autoprepare.out
@@ -0,0 +1,55 @@
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          1
+ select * from pg_autoprepared_statements;               | {}              |          1
+(2 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+                        statement                        | parameter_types | exec_count 
+---------------------------------------------------------+-----------------+------------
+ select count(*) from pg_class where relname='pg_class'; | {unknown}       |          2
+ select * from pg_autoprepared_statements;               | {}              |          2
+(2 rows)
+
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
+select count(*) from pg_class where relname='pg_class';
+ count 
+-------
+     1
+(1 row)
+
+select * from pg_autoprepared_statements;
+ statement | parameter_types | exec_count 
+-----------+-----------------+------------
+(0 rows)
+
diff --git a/src/test/regress/expected/date_1.out b/src/test/regress/expected/date_1.out
new file mode 100644
index 0000000000..b6101c7c7f
--- /dev/null
+++ b/src/test/regress/expected/date_1.out
@@ -0,0 +1,1477 @@
+--
+-- DATE
+--
+CREATE TABLE DATE_TBL (f1 date);
+INSERT INTO DATE_TBL VALUES ('1957-04-09');
+INSERT INTO DATE_TBL VALUES ('1957-06-13');
+INSERT INTO DATE_TBL VALUES ('1996-02-28');
+INSERT INTO DATE_TBL VALUES ('1996-02-29');
+INSERT INTO DATE_TBL VALUES ('1996-03-01');
+INSERT INTO DATE_TBL VALUES ('1996-03-02');
+INSERT INTO DATE_TBL VALUES ('1997-02-28');
+INSERT INTO DATE_TBL VALUES ('1997-02-29');
+ERROR:  date/time field value out of range: "1997-02-29"
+LINE 1: INSERT INTO DATE_TBL VALUES ('1997-02-29');
+                                     ^
+INSERT INTO DATE_TBL VALUES ('1997-03-01');
+INSERT INTO DATE_TBL VALUES ('1997-03-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-01');
+INSERT INTO DATE_TBL VALUES ('2000-04-02');
+INSERT INTO DATE_TBL VALUES ('2000-04-03');
+INSERT INTO DATE_TBL VALUES ('2038-04-08');
+INSERT INTO DATE_TBL VALUES ('2039-04-09');
+INSERT INTO DATE_TBL VALUES ('2040-04-10');
+SELECT f1 AS "Fifteen" FROM DATE_TBL;
+  Fifteen   
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+ 04-08-2038
+ 04-09-2039
+ 04-10-2040
+(15 rows)
+
+SELECT f1 AS "Nine" FROM DATE_TBL WHERE f1 < '2000-01-01';
+    Nine    
+------------
+ 04-09-1957
+ 06-13-1957
+ 02-28-1996
+ 02-29-1996
+ 03-01-1996
+ 03-02-1996
+ 02-28-1997
+ 03-01-1997
+ 03-02-1997
+(9 rows)
+
+SELECT f1 AS "Three" FROM DATE_TBL
+  WHERE f1 BETWEEN '2000-01-01' AND '2001-01-01';
+   Three    
+------------
+ 04-01-2000
+ 04-02-2000
+ 04-03-2000
+(3 rows)
+
+--
+-- Check all the documented input formats
+--
+SET datestyle TO iso;  -- display results in ISO
+SET datestyle TO ymd;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+ERROR:  date/time field value out of range: "1/8/1999"
+LINE 1: SELECT date '1/8/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2001-02-03
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+ERROR:  date/time field value out of range: "January 8, 99 BC"
+LINE 1: SELECT date 'January 8, 99 BC';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+ERROR:  date/time field value out of range: "08-Jan-99"
+LINE 1: SELECT date '08-Jan-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+ERROR:  date/time field value out of range: "Jan-08-99"
+LINE 1: SELECT date 'Jan-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+ERROR:  date/time field value out of range: "08 Jan 99"
+LINE 1: SELECT date '08 Jan 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+ERROR:  date/time field value out of range: "Jan 08 99"
+LINE 1: SELECT date 'Jan 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+ERROR:  date/time field value out of range: "08-01-99"
+LINE 1: SELECT date '08-01-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08-01-1999';
+ERROR:  date/time field value out of range: "08-01-1999"
+LINE 1: SELECT date '08-01-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-99';
+ERROR:  date/time field value out of range: "01-08-99"
+LINE 1: SELECT date '01-08-99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01-08-1999';
+ERROR:  date/time field value out of range: "01-08-1999"
+LINE 1: SELECT date '01-08-1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+ERROR:  date/time field value out of range: "08 01 99"
+LINE 1: SELECT date '08 01 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '08 01 1999';
+ERROR:  date/time field value out of range: "08 01 1999"
+LINE 1: SELECT date '08 01 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 99';
+ERROR:  date/time field value out of range: "01 08 99"
+LINE 1: SELECT date '01 08 99';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01 08 1999';
+ERROR:  date/time field value out of range: "01 08 1999"
+LINE 1: SELECT date '01 08 1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '99 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO dmy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '1/18/1999';
+ERROR:  date/time field value out of range: "1/18/1999"
+LINE 1: SELECT date '1/18/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '18/1/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-02-01
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  date/time field value out of range: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SET datestyle TO mdy;
+SELECT date 'January 8, 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999-01-18';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '1/8/1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1/18/1999';
+    date    
+------------
+ 1999-01-18
+(1 row)
+
+SELECT date '18/1/1999';
+ERROR:  date/time field value out of range: "18/1/1999"
+LINE 1: SELECT date '18/1/1999';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '01/02/03';
+    date    
+------------
+ 2003-01-02
+(1 row)
+
+SELECT date '19990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '990108';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '1999.008';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'J2451187';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'January 8, 99 BC';
+     date      
+---------------
+ 0099-01-08 BC
+(1 row)
+
+SELECT date '99-Jan-08';
+ERROR:  date/time field value out of range: "99-Jan-08"
+LINE 1: SELECT date '99-Jan-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-Jan-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-Jan-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-Jan';
+ERROR:  invalid input syntax for type date: "99-08-Jan"
+LINE 1: SELECT date '99-08-Jan';
+                    ^
+SELECT date '1999-08-Jan';
+ERROR:  invalid input syntax for type date: "1999-08-Jan"
+LINE 1: SELECT date '1999-08-Jan';
+                    ^
+SELECT date '99 Jan 08';
+ERROR:  invalid input syntax for type date: "99 Jan 08"
+LINE 1: SELECT date '99 Jan 08';
+                    ^
+SELECT date '1999 Jan 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 Jan 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date 'Jan 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 Jan';
+ERROR:  invalid input syntax for type date: "99 08 Jan"
+LINE 1: SELECT date '99 08 Jan';
+                    ^
+SELECT date '1999 08 Jan';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-01-08';
+ERROR:  date/time field value out of range: "99-01-08"
+LINE 1: SELECT date '99-01-08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-01-08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08-01-99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08-01-1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01-08-99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01-08-1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99-08-01';
+ERROR:  date/time field value out of range: "99-08-01"
+LINE 1: SELECT date '99-08-01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999-08-01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '99 01 08';
+ERROR:  date/time field value out of range: "99 01 08"
+LINE 1: SELECT date '99 01 08';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 01 08';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '08 01 99';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '08 01 1999';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+SELECT date '01 08 99';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '01 08 1999';
+    date    
+------------
+ 1999-01-08
+(1 row)
+
+SELECT date '99 08 01';
+ERROR:  date/time field value out of range: "99 08 01"
+LINE 1: SELECT date '99 08 01';
+                    ^
+HINT:  Perhaps you need a different "datestyle" setting.
+SELECT date '1999 08 01';
+    date    
+------------
+ 1999-08-01
+(1 row)
+
+-- Check upper and lower limits of date range
+SELECT date '4714-11-24 BC';
+     date      
+---------------
+ 4714-11-24 BC
+(1 row)
+
+SELECT date '4714-11-23 BC';  -- out of range
+ERROR:  date out of range: "4714-11-23 BC"
+LINE 1: SELECT date '4714-11-23 BC';
+                    ^
+SELECT date '5874897-12-31';
+     date      
+---------------
+ 5874897-12-31
+(1 row)
+
+SELECT date '5874898-01-01';  -- out of range
+ERROR:  date out of range: "5874898-01-01"
+LINE 1: SELECT date '5874898-01-01';
+                    ^
+RESET datestyle;
+--
+-- Simple math
+-- Leave most of it for the horology tests
+--
+SELECT f1 - date '2000-01-01' AS "Days From 2K" FROM DATE_TBL;
+ Days From 2K 
+--------------
+       -15607
+       -15542
+        -1403
+        -1402
+        -1401
+        -1400
+        -1037
+        -1036
+        -1035
+           91
+           92
+           93
+        13977
+        14343
+        14710
+(15 rows)
+
+SELECT f1 - date 'epoch' AS "Days From Epoch" FROM DATE_TBL;
+ Days From Epoch 
+-----------------
+           -4650
+           -4585
+            9554
+            9555
+            9556
+            9557
+            9920
+            9921
+            9922
+           11048
+           11049
+           11050
+           24934
+           25300
+           25667
+(15 rows)
+
+SELECT date 'yesterday' - date 'today' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'today' - date 'tomorrow' AS "One day";
+ One day 
+---------
+      -1
+(1 row)
+
+SELECT date 'yesterday' - date 'tomorrow' AS "Two days";
+ Two days 
+----------
+       -2
+(1 row)
+
+SELECT date 'tomorrow' - date 'today' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'today' - date 'yesterday' AS "One day";
+ One day 
+---------
+       1
+(1 row)
+
+SELECT date 'tomorrow' - date 'yesterday' AS "Two days";
+ Two days 
+----------
+        2
+(1 row)
+
+--
+-- test extract!
+--
+-- epoch
+--
+SELECT EXTRACT(EPOCH FROM DATE        '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '1970-01-01');     --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '1970-01-01+00');  --  0
+ date_part 
+-----------
+         0
+(1 row)
+
+--
+-- century
+--
+SELECT EXTRACT(CENTURY FROM DATE '0101-12-31 BC'); -- -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0100-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1900-12-31');    -- 19
+ date_part 
+-----------
+        19
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '1901-01-01');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2000-12-31');    -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM DATE '2001-01-01');    -- 21
+ date_part 
+-----------
+        21
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM CURRENT_DATE)>=21 AS True;     -- true
+ true 
+------
+ t
+(1 row)
+
+--
+-- millennium
+--
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-12-31 BC'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '0001-01-01 AD'); --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1000-12-31');    --  1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '1001-01-01');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2000-12-31');    --  2
+ date_part 
+-----------
+         2
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE '2001-01-01');    --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+-- next test to be fixed on the turn of the next millennium;-)
+SELECT EXTRACT(MILLENNIUM FROM CURRENT_DATE);         --  3
+ date_part 
+-----------
+         3
+(1 row)
+
+--
+-- decade
+--
+SELECT EXTRACT(DECADE FROM DATE '1994-12-25');    -- 199
+ date_part 
+-----------
+       199
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0010-01-01');    --   1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0009-12-31');    --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0001-01-01 BC'); --   0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0002-12-31 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0011-01-01 BC'); --  -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+SELECT EXTRACT(DECADE FROM DATE '0012-12-31 BC'); --  -2
+ date_part 
+-----------
+        -2
+(1 row)
+
+--
+-- some other types:
+--
+-- on a timestamp.
+SELECT EXTRACT(CENTURY FROM NOW())>=21 AS True;       -- true
+ true 
+------
+ t
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM TIMESTAMP '1970-03-20 04:30:00.00000'); -- 20
+ date_part 
+-----------
+        20
+(1 row)
+
+-- on an interval
+SELECT EXTRACT(CENTURY FROM INTERVAL '100 y');  -- 1
+ date_part 
+-----------
+         1
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '99 y');   -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-99 y');  -- 0
+ date_part 
+-----------
+         0
+(1 row)
+
+SELECT EXTRACT(CENTURY FROM INTERVAL '-100 y'); -- -1
+ date_part 
+-----------
+        -1
+(1 row)
+
+--
+-- test trunc function!
+--
+SELECT DATE_TRUNC('MILLENNIUM', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1001
+        date_trunc        
+--------------------------
+ Thu Jan 01 00:00:00 1001
+(1 row)
+
+SELECT DATE_TRUNC('MILLENNIUM', DATE '1970-03-20'); -- 1001-01-01
+          date_trunc          
+------------------------------
+ Thu Jan 01 00:00:00 1001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', TIMESTAMP '1970-03-20 04:30:00.00000'); -- 1901
+        date_trunc        
+--------------------------
+ Tue Jan 01 00:00:00 1901
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '1970-03-20'); -- 1901
+          date_trunc          
+------------------------------
+ Tue Jan 01 00:00:00 1901 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '2004-08-10'); -- 2001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 2001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0002-02-04'); -- 0001-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 0001 PST
+(1 row)
+
+SELECT DATE_TRUNC('CENTURY', DATE '0055-08-10 BC'); -- 0100-01-01 BC
+           date_trunc            
+---------------------------------
+ Tue Jan 01 00:00:00 0100 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '1993-12-25'); -- 1990-01-01
+          date_trunc          
+------------------------------
+ Mon Jan 01 00:00:00 1990 PST
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0004-12-25'); -- 0001-01-01 BC
+           date_trunc            
+---------------------------------
+ Sat Jan 01 00:00:00 0001 PST BC
+(1 row)
+
+SELECT DATE_TRUNC('DECADE', DATE '0002-12-31 BC'); -- 0011-01-01 BC
+           date_trunc            
+---------------------------------
+ Mon Jan 01 00:00:00 0011 PST BC
+(1 row)
+
+--
+-- test infinity
+--
+select 'infinity'::date, '-infinity'::date;
+   date   |   date    
+----------+-----------
+ infinity | -infinity
+(1 row)
+
+select 'infinity'::date > 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select '-infinity'::date < 'today'::date as t;
+ t 
+---
+ t
+(1 row)
+
+select isfinite('infinity'::date), isfinite('-infinity'::date), isfinite('today'::date);
+ isfinite | isfinite | isfinite 
+----------+----------+----------
+ f        | f        | t
+(1 row)
+
+--
+-- oscillating fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(HOUR FROM DATE 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM DATE '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMP   '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ 'infinity');      -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR FROM TIMESTAMPTZ '-infinity');     -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(MICROSECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MILLISECONDS  FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(SECOND        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MINUTE        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(HOUR          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DAY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(MONTH         FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(QUARTER       FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(WEEK          FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOW           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(ISODOW        FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(DOY           FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE      FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_M    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+SELECT EXTRACT(TIMEZONE_H    FROM DATE 'infinity');    -- NULL
+ date_part 
+-----------
+          
+(1 row)
+
+--
+-- monotonic fields from non-finite date/timestamptz:
+--
+SELECT EXTRACT(EPOCH FROM DATE 'infinity');         --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM DATE '-infinity');        -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMP   '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ 'infinity');  --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '-infinity'); -- -Infinity
+ date_part 
+-----------
+ -Infinity
+(1 row)
+
+-- all possible fields
+SELECT EXTRACT(YEAR       FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(DECADE     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(CENTURY    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(MILLENNIUM FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(JULIAN     FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(ISOYEAR    FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+SELECT EXTRACT(EPOCH      FROM DATE 'infinity');    --  Infinity
+ date_part 
+-----------
+  Infinity
+(1 row)
+
+--
+-- wrong fields from non-finite date:
+--
+SELECT EXTRACT(MICROSEC  FROM DATE 'infinity');     -- ERROR:  timestamp units "microsec" not recognized
+ERROR:  timestamp units "microsec" not recognized
+SELECT EXTRACT(UNDEFINED FROM DATE 'infinity');     -- ERROR:  timestamp units "undefined" not supported
+ERROR:  timestamp units "undefined" not supported
+-- test constructors
+select make_date(2013, 7, 15);
+ make_date  
+------------
+ 07-15-2013
+(1 row)
+
+select make_date(-44, 3, 15);
+   make_date   
+---------------
+ 03-15-0044 BC
+(1 row)
+
+select make_time(8, 20, 0.0);
+ make_time 
+-----------
+ 08:20:00
+(1 row)
+
+-- should fail
+select make_date(2013, 2, 30);
+ERROR:  date field value out of range: 2013-02-30
+select make_date(2013, 13, 1);
+ERROR:  date field value out of range: 2013-13-01
+select make_date(2013, 11, -1);
+ERROR:  date field value out of range: 2013-11--1
+select make_time(10, 55, 100.1);
+ERROR:  time field value out of range: 10:55:100.1
+select make_time(24, 0, 2.1);
+ERROR:  time field value out of range: 24:00:2.1
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 210e9cd146..e553a6977e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1307,6 +1307,10 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 8fb55f045e..35926d91e3 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -106,7 +106,7 @@ test: json jsonb json_encoding jsonpath jsonpath_encoding jsonb_jsonpath
 # NB: temp.sql does a reconnect which transiently uses 2 connections,
 # so keep this parallel group to at most 19 tests
 # ----------
-test: plancache limit plpgsql copy2 temp domain rangefuncs prepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
+test: plancache limit plpgsql copy2 temp domain rangefuncs prepare autoprepare conversion truncate alter_table sequence polymorphism rowtypes returning largeobject with xml
 
 # ----------
 # Another group of parallel tests
diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule
index a39ca1012a..912bbc6436 100644
--- a/src/test/regress/serial_schedule
+++ b/src/test/regress/serial_schedule
@@ -173,6 +173,7 @@ test: temp
 test: domain
 test: rangefuncs
 test: prepare
+test: autoprepare
 test: conversion
 test: truncate
 test: alter_table
diff --git a/src/test/regress/sql/autoprepare.sql b/src/test/regress/sql/autoprepare.sql
new file mode 100644
index 0000000000..98e4f7bc77
--- /dev/null
+++ b/src/test/regress/sql/autoprepare.sql
@@ -0,0 +1,11 @@
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 1;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
+set autoprepare_threshold = 0;
+select * from pg_autoprepared_statements;
+select count(*) from pg_class where relname='pg_class';
+select * from pg_autoprepared_statements;
#94Heikki Linnakangas
hlinnaka@iki.fi
In reply to: Konstantin Knizhnik (#93)
Re: [HACKERS] Cached plans and statement generalization

On 09/07/2019 23:59, Konstantin Knizhnik wrote:

A fixed version of the patch is attached.

Much of the patch and the discussion has been around the raw parsing,
and guessing which literals are actually parameters that have been
inlined into the SQL text. Do we really need to do that, though? The
documentation mentions pgbouncer and other connection poolers, where you
don't have session state, as a use case for this. But such connection
poolers could and should still be using the extended query protocol,
with Parse+Bind messages and query parameters, even if they don't use
named prepared statements. I'd want to encourage applications and
middleware to use out-of-band query parameters, to avoid SQL injection
attacks, regardless of whether they use prepared statements or cache
query plans. So how about dropping all the raw parse tree stuff, and
doing the automatic caching of plans based on the SQL string, somewhere
in the exec_parse_message? Check the autoprepare cache in
exec_parse_message(), if it was an "unnamed" prepared statement, i.e. if
the prepared statement name is an empty string.

(I'm actually not sure if exec_parse/bind_message is the right place for
this, but I saw that your current patch did it in exec_simple_query, and
exec_parse/bind_message are the equivalent of that for the extended
query protocol).

- Heikki
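
To make the point about out-of-band parameters concrete, here is a minimal
libpq sketch (the helper and query are illustrative, not taken from any patch
in this thread). PQexecParams sends Parse with an empty statement name plus
Bind/Execute, i.e. exactly the unnamed-statement path where an autoprepare
cache could hook in:

#include <stdio.h>
#include <libpq-fe.h>

/* Illustrative helper: the table and query are assumptions. */
static void
fetch_balance(PGconn *conn, const char *aid)
{
    const char *values[1] = { aid };
    PGresult   *res;

    /* The parameter travels out of band: no literal is inlined in the SQL,
     * so the statement text stays constant across calls. */
    res = PQexecParams(conn,
                       "SELECT abalance FROM pgbench_accounts WHERE aid = $1",
                       1,      /* number of parameters */
                       NULL,   /* let the server infer parameter types */
                       values, /* parameter values, text format */
                       NULL,   /* per-parameter lengths: unused for text */
                       NULL,   /* NULL means all parameters are text */
                       0);     /* text result format */
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        printf("balance = %s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
}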

#95Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Heikki Linnakangas (#94)
Re: [HACKERS] Cached plans and statement generalization

On 31.07.2019 19:56, Heikki Linnakangas wrote:

On 09/07/2019 23:59, Konstantin Knizhnik wrote:

A fixed version of the patch is attached.

Much of the patch and the discussion has been around the raw parsing,
and guessing which literals are actually parameters that have been
inlined into the SQL text. Do we really need to do that, though? The
documentation mentions pgbouncer and other connection poolers, where
you don't have session state, as a use case for this. But such
connection poolers could and should still be using the extended query
protocol, with Parse+Bind messages and query parameters, even if they
don't use named prepared statements. I'd want to encourage
applications and middleware to use out-of-band query parameters, to
avoid SQL injection attacks, regardless of whether they use prepared
statements or cache query plans. So how about dropping all the raw
parse tree stuff, and doing the automatic caching of plans based on
the SQL string, somewhere in the exec_parse_message? Check the
autoprepare cache in exec_parse_message(), if it was an "unnamed"
prepared statement, i.e. if the prepared statement name is an empty
string.

(I'm actually not sure if exec_parse/bind_message is the right place
for this, but I saw that your current patch did it in
exec_simple_query, and exec_parse/bind_message are the equivalent of
that for the extended query protocol).

If the assumption that all client applications use the extended query
protocol holds, it will significantly simplify this patch and eliminate
all the complexity and trouble caused by replacing string literals with
parameters.
But I am not sure that we can expect it. At least I myself see many
applications which construct queries by embedding literal values.
Maybe it is not so good and safe (SQL injection attacks), but many
applications do it. And the idea was to improve the execution speed of
existing applications without changing them.

Also please notice that the extended protocol requires passing more
messages, which has a negative effect on performance: instead of a single
Query message, each query typically sends Parse, Bind, Describe, Execute
and Sync messages.
On my notebook I get about 21k TPS on "pgbench -S" and 18k TPS on
"pgbench -M extended -S".
And that is with a unix socket connection! I think that in case of remote
connections the difference will be even larger.

So maybe committing a simple version of this patch, which does not need to
solve any challenging problems, is a good idea.
But I am afraid that it will significantly reduce the positive effect of
this patch.

#96Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Heikki Linnakangas (#94)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 31.07.2019 19:56, Heikki Linnakangas wrote:

On 09/07/2019 23:59, Konstantin Knizhnik wrote:

A fixed version of the patch is attached.

Much of the patch and the discussion has been around the raw parsing,
and guessing which literals are actually parameters that have been
inlined into the SQL text. Do we really need to do that, though? The
documentation mentions pgbouncer and other connection poolers, where
you don't have session state, as a use case for this. But such
connection poolers could and should still be using the extended query
protocol, with Parse+Bind messages and query parameters, even if they
don't use named prepared statements. I'd want to encourage
applications and middleware to use out-of-band query parameters, to
avoid SQL injection attacks, regardless of whether they use prepared
statements or cache query plans. So how about dropping all the raw
parse tree stuff, and doing the automatic caching of plans based on
the SQL string, somewhere in the exec_parse_message? Check the
autoprepare cache in exec_parse_message(), if it was an "unnamed"
prepared statement, i.e. if the prepared statement name is an empty
string.

(I'm actually not sure if exec_parse/bind_message is the right place
for this, but I saw that your current patch did it in
exec_simple_query, and exec_parse/bind_message are the equivalent of
that for the extended query protocol).

- Heikki

I decided to implement your proposal. A much simpler version of
the autoprepare patch is attached.
On my computer I got the following results:

 pgbench -M simple -S         22495 TPS
 pgbench -M extended -S    47633 TPS
 pgbench -M prepared -S    47683 TPS

So autoprepare speeds up execution of queries sent using the extended
protocol more than twice, and performance is almost the same as with
explicitly prepared statements.
I failed to create a regression test for it because I do not know how to
force psql to use the extended protocol. Any advice is welcome.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-extended-1.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..ee29e36
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance more than two times.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends) and so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres caches plans for all queries passed using the extended protocol.
+    Execution speed of autoprepared statements is almost the same as that of explicitly prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_limit</varname> or <varname>autoprepare_memory_limit</varname>.
+    These variables limit the number of autoprepared statements and the memory used by them.
+    Autoprepare is enabled if at least one of them is non-zero; the value -1 means unlimited.
+    Please notice that even when autoprepare is enabled, Postgres still chooses between the
+    generalized plan and customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    Too large a number of autoprepared statements can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). So using an unlimited autoprepare hash is dangerous and not recommended.
+    Postgres uses an LRU policy to keep the most recently used queries in memory.
+  </para>
+
+  <para>
+    The autoprepare hash is local to the backend. It is implicitly reset on any change of the database schema or
+    session variables.
+  </para>
+
+  <para>
+    It is possible to inspect autoprepared queries in the backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of its parameters and the
+    query execution counter.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index c91e3e1..73c07e7 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5326,6 +5326,39 @@ SELECT * FROM parent WHERE key = 2400;
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+       <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+           <indexterm>
+             <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum number of autoprepared queries.
+           Zero disables autoprepare, -1 means an unlimited number of autoprepared queries.
+           Too many prepared queries can cause backend memory overflow and slow down execution
+           (because of increased lookup time). The default value is <literal>0</literal>.
+         </para>
+       </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+       <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+           <indexterm>
+             <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum size of memory used by autoprepared queries.
+           Zero disables autoprepare, -1 means that there is no memory limit.
+           The default value is <literal>0</literal>. Calculating the memory used by prepared queries adds some extra overhead,
+           so a positive value of this parameter may cause some slowdown.
+           <varname>autoprepare_limit</varname> is a much faster way to limit the number of autoprepared statements.
+         </para>
+       </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
    </sect1>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..4bd9f31 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ab9f279 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ea4c85e..f5b9481 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -332,6 +332,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
     l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index c12b613..ea7a5b1 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 									const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index a6505c7..3d149a6 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -71,6 +71,8 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
@@ -171,6 +173,8 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -197,6 +201,167 @@ static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
 
+/* ----------------
+ *		autoprepare stuff
+ * ----------------
+ */
+typedef struct
+{
+	char* query_string;
+	Oid*  param_types;
+	int   num_params;
+} AutoprepareStatementKey;
+
+typedef struct
+{
+	AutoprepareStatementKey key;
+	CachedPlanSource* plan;
+	dlist_node		  lru;		  /* double linked list to implement LRU */
+	int64			  exec_count; /* counter of execution of this query */
+}  AutoprepareStatement;
+
+static AutoprepareStatement* autoprepare_stmt;
+static HTAB* autoprepare_hash;
+static dlist_head autoprepare_lru;
+static MemoryContext autoprepare_context;
+
+#define AUTOPREPARE_HASH_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+static size_t autoprepare_cached_plans;
+static size_t autoprepare_used_memory;
+
+static uint32 autoprepare_hash_fn(const void *key, Size keysize)
+{
+	uint32 hash = 0;
+	int i;
+	AutoprepareStatementKey* ask = (AutoprepareStatementKey*)key;
+	for (i = 0; i < ask->num_params; i++)
+		hash ^= ask->param_types[i];
+	return string_hash(ask->query_string, strlen(ask->query_string)) ^ hash;
+}
+
+static int autoprepare_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	AutoprepareStatementKey* ask1 = (AutoprepareStatementKey*)key1;
+	AutoprepareStatementKey* ask2 = (AutoprepareStatementKey*)key2;
+	if (ask1->num_params != ask2->num_params)
+		return 1;
+	if (memcmp(ask1->param_types, ask2->param_types, ask1->num_params*sizeof(Oid)) != 0)
+		return 1;
+
+	return strcmp(ask1->query_string, ask2->query_string);
+}
+
+static Size
+get_memory_context_size(MemoryContext ctx)
+{
+	MemoryContextCounters counters = {0};
+	ctx->methods->stats(ctx, NULL, NULL, &counters);
+	return counters.totalspace;
+}
+
+/*
+ * This set returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(3);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (autoprepare_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		AutoprepareStatement *autoprep_stmt;
+
+		hash_seq_init(&hash_seq, autoprepare_hash);
+		while ((autoprep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (autoprep_stmt->plan != NULL)
+			{
+				Datum		values[3];
+				bool		nulls[3];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(autoprep_stmt->key.query_string);
+				values[1] = build_regtype_array(autoprep_stmt->key.param_types,
+												autoprep_stmt->key.num_params);
+				values[2] = Int64GetDatum(autoprep_stmt->exec_count);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
+
+void ResetAutoprepareCache(void)
+{
+	if (autoprepare_hash != NULL)
+	{
+		hash_destroy(autoprepare_hash);
+		MemoryContextReset(autoprepare_context);
+		dlist_init(&autoprepare_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		autoprepare_hash = 0;
+	}
+}
+
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1393,14 +1558,103 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/* Unnamed prepared statement --- release any prior unnamed stmt */
-		drop_unnamed_stmt();
-		/* Create context for parsing */
-		unnamed_stmt_context =
-			AllocSetContextCreate(MessageContext,
-								  "unnamed prepared statement",
-								  ALLOCSET_DEFAULT_SIZES);
-		oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+		/* check if autoprepare is enabled */
+		if (autoprepare_limit || autoprepare_memory_limit)
+		{
+			bool found;
+			AutoprepareStatementKey key;
+			key.query_string = (char*)query_string;
+			key.param_types = paramTypes;
+			key.num_params = numParams;
+
+			/*
+			 * Construct autoprepare context if not constructed yet.
+			 */
+			if (!autoprepare_context)
+				autoprepare_context = AllocSetContextCreate(CacheMemoryContext,
+														   "autoprepare context",
+														   ALLOCSET_DEFAULT_SIZES);
+
+			oldcontext = MemoryContextSwitchTo(autoprepare_context);
+
+			if (autoprepare_hash == NULL) /*initialize hash if not done yet */
+			{
+				static HASHCTL info;
+				info.keysize = sizeof(AutoprepareStatementKey);
+				info.entrysize = sizeof(AutoprepareStatement);
+				info.hash = autoprepare_hash_fn;
+				info.match = autoprepare_match_fn;
+				autoprepare_hash = hash_create("autoprepare_hash", autoprepare_limit != 0 ? autoprepare_limit : AUTOPREPARE_HASH_SIZE,
+											   &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+				dlist_init(&autoprepare_lru);
+			}
+			autoprepare_stmt = (AutoprepareStatement*)hash_search(autoprepare_hash, &key, HASH_ENTER, &found);
+			if (found)
+			{
+				autoprepare_stmt->exec_count += 1;
+				dlist_delete(&autoprepare_stmt->lru); /* accessed entry will be moved to the head of LRU list */
+				if (autoprepare_stmt->plan != NULL && !autoprepare_stmt->plan->is_valid)
+				{
+					/* Drop invalidated plan: it will be reconstructed later */
+					if (autoprepare_memory_limit)
+						autoprepare_used_memory -= get_memory_context_size(autoprepare_stmt->plan->context);
+					DropCachedPlan(autoprepare_stmt->plan);
+					autoprepare_stmt->plan = NULL;
+				}
+				else
+				{
+					dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+					goto ParseCompleted;
+				}
+			}
+			else
+			{
+				/* Initialize autoprepare hash entry */
+				autoprepare_cached_plans += 1;
+				autoprepare_stmt->plan = NULL;
+				autoprepare_stmt->exec_count = 1;
+
+				/* Copy query text and parameter type info to CacheMemoryContext */
+				autoprepare_stmt->key.query_string = pstrdup(query_string);
+				autoprepare_stmt->key.param_types = (Oid*)palloc(sizeof(Oid)*numParams);
+				memcpy(autoprepare_stmt->key.param_types, paramTypes, sizeof(Oid)*numParams);
+
+				/* LRU eviction policy */
+				while ((autoprepare_limit > 0 && autoprepare_cached_plans > autoprepare_limit)
+					   || (autoprepare_memory_limit > 0 && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+				{
+					/* Drop least recently accessed query */
+					AutoprepareStatement* victim = dlist_container(AutoprepareStatement, lru, autoprepare_lru.head.prev);
+					dlist_delete(&victim->lru);
+					if (victim->plan)
+					{
+						if (autoprepare_memory_limit)
+							autoprepare_used_memory -= get_memory_context_size(victim->plan->context);
+						DropCachedPlan(victim->plan);
+					}
+					hash_search(autoprepare_hash, victim, HASH_REMOVE, NULL);
+					pfree(victim->key.query_string);
+					pfree(victim->key.param_types);
+					autoprepare_cached_plans -= 1;
+				}
+			}
+			dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+
+			/* Autoprepare statements also parsed in MessageContext */
+			MemoryContextSwitchTo(MessageContext);
+		}
+		else
+		{
+			/* Unnamed prepared statement --- release any prior unnamed stmt */
+			drop_unnamed_stmt();
+			/* Create context for parsing */
+			unnamed_stmt_context =
+				AllocSetContextCreate(MessageContext,
+									  "unnamed prepared statement",
+									  ALLOCSET_DEFAULT_SIZES);
+			oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+			autoprepare_stmt = NULL;
+		}
 	}
 
 	/*
@@ -1539,13 +1793,17 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/*
-		 * We just save the CachedPlanSource into unnamed_stmt_psrc.
-		 */
 		SaveCachedPlan(psrc);
-		unnamed_stmt_psrc = psrc;
+		if (autoprepare_stmt)
+			autoprepare_stmt->plan = psrc;
+		else
+			/*
+			 * We just save the CachedPlanSource into unnamed_stmt_psrc.
+			 */
+			unnamed_stmt_psrc = psrc;
 	}
 
+  ParseCompleted:
 	MemoryContextSwitchTo(oldcontext);
 
 	/*
@@ -1633,7 +1891,7 @@ exec_bind_message(StringInfo input_message)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
@@ -2457,7 +2715,7 @@ exec_describe_statement_message(const char *stmt_name)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 7f6f0b6..e0ab57f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -966,6 +968,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index fc46360..39deebe 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -534,6 +534,9 @@ int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 int			tcp_user_timeout;
 
+int			autoprepare_limit;
+int			autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2268,6 +2271,31 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Limits for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum number of autoprepared queries."),
+		 gettext_noop("0 disables autoprepare, -1 means an unlimited number of autoprepared queries. Too many prepared queries can cause backend memory overflow and slow down execution (because of increased lookup time)")
+		},
+		&autoprepare_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximum size of memory used by autoprepared queries."),
+		 gettext_noop("0 disables autoprepare, -1 means that there is no memory limit. "
+					  "Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown. autoprepare_limit is a much faster way to limit the number of autoprepared statements"),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 0902dce..ea40a1d 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7608,6 +7608,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8}',
+  proargmodes => '{o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index 2ce8324..11322ce 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 9e24cfa..db3e448 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 							 long count,
 							 DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index e800230..d06a04c 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -256,6 +256,9 @@ extern int	log_temp_files;
 extern double log_statement_sample_rate;
 extern double log_xact_sample_rate;
 
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
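
With the patch applied, the feature can be exercised from libpq roughly as
follows (a hedged sketch: the GUC and view names are taken from the patch
above, the query and helper are illustrative):

#include <libpq-fe.h>

/* Sketch: enable autoprepare and watch the cache fill up. */
static void
demo_autoprepare(PGconn *conn)
{
    const char *relname[1] = { "pg_class" };
    PGresult   *res;
    int         i;

    /* Enable autoprepare for this session (GUC from the patch). */
    PQclear(PQexec(conn, "SET autoprepare_limit = 100"));

    /* Each PQexecParams call takes the unnamed-statement path in
     * exec_parse_message, so the second and third Parse should hit
     * the autoprepare cache instead of replanning. */
    for (i = 0; i < 3; i++)
    {
        res = PQexecParams(conn,
                           "SELECT count(*) FROM pg_class WHERE relname = $1",
                           1, NULL, relname, NULL, NULL, 0);
        PQclear(res);
    }

    /* Per the patch, one row with exec_count = 3 would be expected here. */
    res = PQexec(conn,
                 "SELECT statement, exec_count FROM pg_autoprepared_statements");
    PQclear(res);
}
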
#97Daniel Migowski
dmigowski@ikoffice.de
In reply to: Konstantin Knizhnik (#96)
Re: [HACKERS] Cached plans and statement generalization

Am 01.08.2019 um 18:56 schrieb Konstantin Knizhnik:

I decided to implement your proposal. A much simpler version of
the autoprepare patch is attached.
On my computer I got the following results:

 pgbench -M simple -S         22495 TPS
 pgbench -M extended -S    47633 TPS
 pgbench -M prepared -S    47683 TPS

So autoprepare speeds up execution of queries sent using the extended
protocol more than twice, and performance is almost the same as with
explicitly prepared statements.
I failed to create a regression test for it because I do not know how to
force psql to use the extended protocol. Any advice is welcome.

I am very interested in such a patch, because currently I use the same
functionality within my JDBC driver and having this directly in
PostgreSQL would surely speed things up.

I have two suggestions however:

1. Please allow gathering information about the autoprepared statements
by returning them in the pg_prepared_statements view. I would love to
monitor their usage as well as the memory consumption that occurs. I
suggested a patch to display that in
/messages/by-id/41ED3F5450C90F4D8381BC4D8DF6BBDCF02E10B4@EXCHANGESERVER.ikoffice.de

2. Please not only use an LRU list, but maybe something which would
prefer statements that get reused at all. We often create ad hoc
statements with parameters which don't really need to be kept. Maybe I
can suggest an implementation of an LRU list where reuse of a
statement not only pulls it to the top of the list but also increases a
reuse counter. When a statement would then drop off the end of the list,
one checks whether the reuse count is non-zero, and only really drops it
if the reuse count is zero. Otherwise the reuse count is decremented (or
halved) and the element is also placed at the start of the LRU list, so
it is kept a bit longer.
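
A minimal sketch of that eviction rule, written against PostgreSQL's dlist
API (struct and function names are illustrative, not from the patch):

#include "postgres.h"
#include "lib/ilist.h"

typedef struct
{
    dlist_node lru;          /* position in the LRU list */
    int        reuse_count;  /* bumped on every cache hit */
    /* ... key and cached plan would live here ... */
} CacheEntry;

/* On a cache hit: pull the entry to the head and remember the reuse. */
static void
cache_touch(dlist_head *lru_list, CacheEntry *entry)
{
    dlist_delete(&entry->lru);
    dlist_push_head(lru_list, &entry->lru);
    entry->reuse_count++;
}

/* On eviction: only drop entries that were never reused; reused entries
 * get their counter halved and another trip through the list. */
static CacheEntry *
cache_pick_victim(dlist_head *lru_list)
{
    for (;;)
    {
        CacheEntry *tail = dlist_container(CacheEntry, lru,
                                           dlist_tail_node(lru_list));

        if (tail->reuse_count == 0)
            return tail;            /* never reused: really drop it */

        tail->reuse_count /= 2;     /* decay, then give a second chance */
        dlist_delete(&tail->lru);
        dlist_push_head(lru_list, &tail->lru);
    }
}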

Thanks,

Daniel Migowski

#98Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Daniel Migowski (#97)
Re: [HACKERS] Cached plans and statement generalization

On 02.08.2019 11:25, Daniel Migowski wrote:

Am 01.08.2019 um 18:56 schrieb Konstantin Knizhnik:

I decided to implement your proposal. A much simpler version of
the autoprepare patch is attached.
On my computer I got the following results:

 pgbench -M simple -S         22495 TPS
 pgbench -M extended -S    47633 TPS
 pgbench -M prepared -S    47683 TPS

So autoprepare speeds up execution of queries sent using the extended
protocol more than twice, and performance is almost the same as with
explicitly prepared statements.
I failed to create a regression test for it because I do not know how
to force psql to use the extended protocol. Any advice is welcome.

I am very interested in such a patch, because currently I use the same
functionality within my JDBC driver and having this directly in
PostgreSQL would surely speed things up.

I have two suggestions however:

1. Please allow gathering information about the autoprepared
statements by returning them in the pg_prepared_statements view. I would
love to monitor their usage as well as the memory consumption that
occurs. I suggested a patch to display that in
/messages/by-id/41ED3F5450C90F4D8381BC4D8DF6BBDCF02E10B4@EXCHANGESERVER.ikoffice.de

Sorry, but there is a pg_autoprepared_statements view. I think that it
is better to distinguish explicitly and implicitly prepared
statements, isn't it?
Do you want to add more information to this view? Right now it shows
the query string, the types of parameters and the number of executions of the query.

2. Please not only use an LRU list, but maybe something which would
prefer statements that get reused at all. We often create ad hoc
statements with parameters which don't really need to be kept. Maybe I
can suggest an implementation of an LRU list where reuse of a
statement not only pulls it to the top of the list but also increases
a reuse counter. When a statement would then drop off the end of the
list, one checks whether the reuse count is non-zero, and only really drops
it if the reuse count is zero. Otherwise the reuse count is decremented
(or halved) and the element is also placed at the start of the LRU
list, so it is kept a bit longer.

There are many variations of LRU and alternatives: clock, segmented LRU, ...
I agree that classical LRU may not be the best replacement policy for
autoprepare.
An application either uses static queries or constructs them
dynamically, so it would be better to distinguish queries executed at least
twice from one-shot queries.
So maybe SLRU (segmented LRU) would be the best choice. But actually the
situation you have described (we often create ad hoc statements with
parameters which don't really need to be kept)
is not applicable to autoprepare. Autoprepare deals only with
statements which are actually executed.
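
For reference, a segmented LRU along these lines could look like the sketch
below (illustrative only, not part of any posted patch): entries start on a
probationary list and are promoted on their second use, so one-shot queries
are always evicted first.

#include "postgres.h"
#include "lib/ilist.h"

typedef struct
{
    dlist_node lru;
    /* ... key and cached plan would live here ... */
} SlruEntry;

static dlist_head probation_list = DLIST_STATIC_INIT(probation_list);
static dlist_head protected_list = DLIST_STATIC_INIT(protected_list);

/* A second use promotes the entry to the protected segment. */
static void
slru_touch(SlruEntry *entry)
{
    dlist_delete(&entry->lru);
    dlist_push_head(&protected_list, &entry->lru);
}

/* Evict one-shot statements first; fall back to the protected segment
 * only when no probationary entries are left. */
static SlruEntry *
slru_pick_victim(void)
{
    dlist_head *seg = !dlist_is_empty(&probation_list)
        ? &probation_list
        : &protected_list;

    return dlist_container(SlruEntry, lru, dlist_tail_node(seg));
}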

We have another patch which limits the number of stored prepared plans to
avoid memory overflow (in case of stored procedures which
implicitly prepare all issued queries).
Here the choice of replacement policy is more critical.

Thanks,

Daniel Migowski

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#99Daniel Migowski
dmigowski@ikoffice.de
In reply to: Konstantin Knizhnik (#98)
Re: [HACKERS] Cached plans and statement generalization

Am 02.08.2019 um 10:57 schrieb Konstantin Knizhnik:

On 02.08.2019 11:25, Daniel Migowski wrote:

I have two suggestions however:

1. Please allow gathering information about the autoprepared
statements by returning them in the pg_prepared_statements view. I would
love to monitor their usage as well as the memory consumption that
occurs. I suggested a patch to display that in
/messages/by-id/41ED3F5450C90F4D8381BC4D8DF6BBDCF02E10B4@EXCHANGESERVER.ikoffice.de

Sorry, but there is a pg_autoprepared_statements view. I think that it
is better to distinguish explicitly and implicitly prepared
statements, isn't it?
Do you want to add more information to this view? Right now it shows
the query string, the types of parameters and the number of executions of the query.

Sorry, I didn't notice it in the patch. If there is another view for
those, that's perfectly fine.

2. Please not only use an LRU list, but maybe something which would
prefer statements that get reused at all. We often create ad hoc
statements with parameters which don't really need to be kept. Maybe
I can suggest an implementation of an LRU list where reuse of a
statement not only pulls it to the top of the list but also increases
a reuse counter. When a statement would then drop off the end of the
list, one checks whether the reuse count is non-zero, and only really
drops it if the reuse count is zero. Otherwise the reuse count is
decremented (or halved) and the element is also placed at the start
of the LRU list, so it is kept a bit longer.

There are many variations of LRU and alternatives: clock, segmented
LRU, ...
I agree that classical LRU may not be the best replacement policy for
autoprepare.
An application either uses static queries or constructs them
dynamically, so it would be better to distinguish queries executed at
least twice from one-shot queries.
So maybe SLRU (segmented LRU) would be the best choice. But actually
the situation you have described (we often create ad hoc statements
with parameters which don't really need to be kept)
is not applicable to autoprepare. Autoprepare deals only with
statements which are actually executed.

We have another patch which limits the number of stored prepared plans to
avoid memory overflow (in case of stored procedures which
implicitly prepare all issued queries).
Here the choice of replacement policy is more critical.

Alright, I just wanted to have something better than plain LRU, so it is not
a step back from the implementation in the JDBC driver for us.

Kind regards,
Daniel Migowski

#100Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Konstantin Knizhnik (#96)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 01.08.2019 19:56, Konstantin Knizhnik wrote:

On 31.07.2019 19:56, Heikki Linnakangas wrote:

On 09/07/2019 23:59, Konstantin Knizhnik wrote:

A fixed version of the patch is attached.

Much of the patch and the discussion has been around the raw parsing,
and guessing which literals are actually parameters that have been
inlined into the SQL text. Do we really need to do that, though? The
documentation mentions pgbouncer and other connection poolers, where
you don't have session state, as a use case for this. But such
connection poolers could and should still be using the extended query
protocol, with Parse+Bind messages and query parameters, even if they
don't use named prepared statements. I'd want to encourage
applications and middleware to use out-of-band query parameters, to
avoid SQL injection attacks, regardless of whether they use prepared
statements or cache query plans. So how about dropping all the raw
parse tree stuff, and doing the automatic caching of plans based on
the SQL string, somewhere in the exec_parse_message? Check the
autoprepare cache in exec_parse_message(), if it was an "unnamed"
prepared statement, i.e. if the prepared statement name is an empty
string.

(I'm actually not sure if exec_parse/bind_message is the right place
for this, but I saw that your current patch did it in
exec_simple_query, and exec_parse/bind_message are the equivalent of
that for the extended query protocol).

- Heikki

I decided to implement your proposal. A much simpler version of
the autoprepare patch is attached.
On my computer I got the following results:

 pgbench -M simple -S         22495 TPS
 pgbench -M extended -S    47633 TPS
 pgbench -M prepared -S    47683 TPS

So autoprepare speeds up execution of queries sent using the extended
protocol more than twice, and performance is almost the same as with
explicitly prepared statements.
I failed to create a regression test for it because I do not know how to
force psql to use the extended protocol. Any advice is welcome.

A slightly improved and rebased version of the patch is attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-extended-2.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..c4d96e6
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of their compilation
+    and optimization on each execution of the query. On simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance more than two times.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends) and so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres caches plans for all queries passed using the extended protocol.
+    Execution speed of autoprepared statements is almost the same as that of explicitly prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_limit</varname> or <varname>autoprepare_memory_limit</varname>.
+    These variables limit the number of autoprepared statements and the memory used by them.
+    Autoprepare is enabled if at least one of them is non-zero; the value -1 means unlimited.
+    Please notice that even when autoprepare is enabled, Postgres still chooses between the
+    generalized plan and customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    Too large a number of autoprepared statements can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). So using an unlimited autoprepare hash is dangerous and not recommended.
+    Postgres uses an LRU policy to keep the most recently used queries in memory.
+  </para>
+
+  <para>
+    The autoprepare hash is local to the backend. It is implicitly reset on any change of the database schema or
+    session variables.
+  </para>
+
+  <para>
+    It is possible to inspect autoprepared queries in the backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of its parameters, the
+    query execution counter and the memory used.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index cdc30fa..28c9343 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5326,6 +5326,39 @@ SELECT * FROM parent WHERE key = 2400;
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+       <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+           <indexterm>
+             <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum number of autoprepared queries.
+           Zero disables autoprepare, -1 means an unlimited number of autoprepared queries.
+           Too many prepared queries can cause backend memory overflow and slow down execution
+           (because of increased lookup time). The default value is <literal>0</literal>.
+         </para>
+       </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+       <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+           <indexterm>
+             <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum size of memory used by autoprepared queries.
+           Zero disables autoprepare, -1 means that there is no memory limit.
+           The default value is <literal>0</literal>. Calculating the memory used by prepared queries adds some extra overhead,
+           so a positive value of this parameter may cause some slowdown.
+           <varname>autoprepare_limit</varname> is a much faster way to limit the number of autoprepared statements.
+         </para>
+       </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
    </sect1>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..4bd9f31 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index 3e115f1..ab9f279 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ea4c85e..f5b9481 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -332,6 +332,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
     l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 7e0a041..795b1e5 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 									const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index e8d8e6f..423c7f7 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -71,6 +71,8 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
@@ -171,6 +173,8 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -197,6 +201,163 @@ static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
 
+/* ----------------
+ *		autoprepare stuff
+ * ----------------
+ */
+typedef struct
+{
+	char* query_string;
+	Oid*  param_types;
+	int   num_params;
+} AutoprepareStatementKey;
+
+typedef struct
+{
+	AutoprepareStatementKey key;
+	CachedPlanSource* plan;
+	dlist_node		  lru;		   /* double linked list to implement LRU */
+	int64			  exec_count;  /* counter of execution of this query */
+	Size              used_memory; /* memory used by prepared statement */
+}  AutoprepareStatement;
+
+static AutoprepareStatement* autoprepare_stmt;
+static HTAB* autoprepare_hash;
+static dlist_head autoprepare_lru;
+static MemoryContext autoprepare_context;
+
+#define AUTOPREPARE_HASH_SIZE 113
+
+/*
+ * Plan cache access statistic
+ */
+static size_t autoprepare_cached_plans;
+static size_t autoprepare_used_memory;
+
+static uint32 autoprepare_hash_fn(const void *key, Size keysize)
+{
+	uint32 hash = 0;
+	int i;
+	AutoprepareStatementKey* ask = (AutoprepareStatementKey*)key;
+	for (i = 0; i < ask->num_params; i++)
+		hash ^= ask->param_types[i];
+	return string_hash(ask->query_string, strlen(ask->query_string)) ^ hash;
+}
+
+static int autoprepare_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	AutoprepareStatementKey* ask1 = (AutoprepareStatementKey*)key1;
+	AutoprepareStatementKey* ask2 = (AutoprepareStatementKey*)key2;
+	if (ask1->num_params != ask2->num_params)
+		return 1;
+	if (memcmp(ask1->param_types, ask2->param_types, ask1->num_params*sizeof(Oid)) != 0)
+		return 1;
+
+	return strcmp(ask1->query_string, ask2->query_string);
+}
+
+/*
+ * This set returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count, used_memory).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(4);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "used_memory",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (autoprepare_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		AutoprepareStatement *autoprep_stmt;
+
+		hash_seq_init(&hash_seq, autoprepare_hash);
+		while ((autoprep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (autoprep_stmt->plan != NULL)
+			{
+				Datum		values[4];
+				bool		nulls[4];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(autoprep_stmt->key.query_string);
+				values[1] = build_regtype_array(autoprep_stmt->key.param_types,
+												autoprep_stmt->key.num_params);
+				values[2] = Int64GetDatum(autoprep_stmt->exec_count);
+				values[3] = Int64GetDatum(autoprep_stmt->used_memory);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
+
+void ResetAutoprepareCache(void)
+{
+	if (autoprepare_hash != NULL)
+	{
+		hash_destroy(autoprepare_hash);
+		MemoryContextReset(autoprepare_context);
+		dlist_init(&autoprepare_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		autoprepare_hash = 0;
+	}
+}
+
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1393,14 +1554,105 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/* Unnamed prepared statement --- release any prior unnamed stmt */
-		drop_unnamed_stmt();
-		/* Create context for parsing */
-		unnamed_stmt_context =
-			AllocSetContextCreate(MessageContext,
-								  "unnamed prepared statement",
-								  ALLOCSET_DEFAULT_SIZES);
-		oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+		/* check if autoprepare is enabled */
+		if (autoprepare_limit || autoprepare_memory_limit)
+		{
+			bool found;
+			AutoprepareStatementKey key;
+			key.query_string = (char*)query_string;
+			key.param_types = paramTypes;
+			key.num_params = numParams;
+
+			/*
+			 * Construct autoprepare context if not constructed yet.
+			 */
+			if (!autoprepare_context)
+				autoprepare_context = AllocSetContextCreate(CacheMemoryContext,
+														   "autoprepare context",
+														   ALLOCSET_DEFAULT_SIZES);
+
+			oldcontext = MemoryContextSwitchTo(autoprepare_context);
+
+			if (autoprepare_hash == NULL) /* initialize hash if not done yet */
+			{
+				static HASHCTL info;
+				info.keysize = sizeof(AutoprepareStatementKey);
+				info.entrysize = sizeof(AutoprepareStatement);
+				info.hash = autoprepare_hash_fn;
+				info.match = autoprepare_match_fn;
+				autoprepare_hash = hash_create("autoprepare_hash", autoprepare_limit > 0 ? autoprepare_limit : AUTOPREPARE_HASH_SIZE,
+											   &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+				dlist_init(&autoprepare_lru);
+			}
+			autoprepare_stmt = (AutoprepareStatement*)hash_search(autoprepare_hash, &key, HASH_ENTER, &found);
+			if (found)
+			{
+				autoprepare_stmt->exec_count += 1;
+				dlist_delete(&autoprepare_stmt->lru); /* accessed entry will be moved to the head of LRU list */
+				if (autoprepare_stmt->plan != NULL && !autoprepare_stmt->plan->is_valid)
+				{
+					/* Drop invalidated plan: it will be reconstructed later */
+					if (autoprepare_memory_limit)
+						autoprepare_used_memory -= autoprepare_stmt->used_memory;
+					DropCachedPlan(autoprepare_stmt->plan);
+					autoprepare_stmt->plan = NULL;
+					autoprepare_stmt->used_memory = 0;
+				}
+				else
+				{
+					dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+					goto ParseCompleted;
+				}
+			}
+			else
+			{
+				/* Initialize autoprepare hash entry */
+				autoprepare_cached_plans += 1;
+				autoprepare_stmt->plan = NULL;
+				autoprepare_stmt->used_memory = 0;
+				autoprepare_stmt->exec_count = 1;
+
+				/* Copy query text and parameter type info to the autoprepare context */
+				autoprepare_stmt->key.query_string = pstrdup(query_string);
+				autoprepare_stmt->key.param_types = (Oid*)palloc(sizeof(Oid)*numParams);
+				memcpy(autoprepare_stmt->key.param_types, paramTypes, sizeof(Oid)*numParams);
+
+				/* LRU eviction policy */
+				while ((autoprepare_limit > 0 && autoprepare_cached_plans > autoprepare_limit)
+					   || (autoprepare_memory_limit > 0 && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+				{
+					/* Drop least recently accessed query */
+					AutoprepareStatement* victim = dlist_container(AutoprepareStatement, lru, autoprepare_lru.head.prev);
+					dlist_delete(&victim->lru);
+					if (victim->plan)
+					{
+						if (autoprepare_memory_limit)
+							autoprepare_used_memory -= victim->used_memory;
+						DropCachedPlan(victim->plan);
+					}
+					hash_search(autoprepare_hash, victim, HASH_REMOVE, NULL);
+					pfree(victim->key.query_string);
+					pfree(victim->key.param_types);
+					autoprepare_cached_plans -= 1;
+				}
+			}
+			dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+
+			/* Autoprepared statements are also parsed in MessageContext */
+			MemoryContextSwitchTo(MessageContext);
+		}
+		else
+		{
+			/* Unnamed prepared statement --- release any prior unnamed stmt */
+			drop_unnamed_stmt();
+			/* Create context for parsing */
+			unnamed_stmt_context =
+				AllocSetContextCreate(MessageContext,
+									  "unnamed prepared statement",
+									  ALLOCSET_DEFAULT_SIZES);
+			oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+			autoprepare_stmt = NULL;
+		}
 	}
 
 	/*
@@ -1539,13 +1791,21 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/*
-		 * We just save the CachedPlanSource into unnamed_stmt_psrc.
-		 */
 		SaveCachedPlan(psrc);
-		unnamed_stmt_psrc = psrc;
+		if (autoprepare_stmt)
+		{
+			autoprepare_stmt->plan = psrc;
+			autoprepare_stmt->used_memory = CachedPlanMemoryUsage(autoprepare_stmt->plan);
+			autoprepare_used_memory += autoprepare_stmt->used_memory;
+		}
+		else
+			/*
+			 * We just save the CachedPlanSource into unnamed_stmt_psrc.
+			 */
+			unnamed_stmt_psrc = psrc;
 	}
 
+  ParseCompleted:
 	MemoryContextSwitchTo(oldcontext);
 
 	/*
@@ -1633,7 +1893,7 @@ exec_bind_message(StringInfo input_message)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
@@ -2444,7 +2704,7 @@ exec_describe_statement_message(const char *stmt_name)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 7f6f0b6..e0ab57f 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -966,6 +968,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index abc3062..93225a5 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -2022,3 +2022,39 @@ ResetPlanCache(void)
 		cexpr->is_valid = false;
 	}
 }
+
+/*
+ * CachedPlanMemoryUsage: Return memory used by the CachedPlanSource
+ *
+ * Returns the malloced memory used by the two MemoryContexts in the
+ * CachedPlanSource and (if available) the MemoryContext in the generic plan.
+ * Free memory in those contexts is not subtracted, because it is very
+ * unlikely to be reused for anything else and can be considered dead
+ * memory anyway. The size of the CachedPlanSource struct is also included.
+ *
+ * This function is used only for the pg_autoprepared_statements view to allow
+ * client applications to monitor the memory used by prepared statements and
+ * to select candidates for eviction in memory-constrained environments with
+ * automatic preparation of frequently executed queries.
+ */
+Size
+CachedPlanMemoryUsage(CachedPlanSource *plan)
+{
+	MemoryContextCounters counters;
+	MemoryContext context;
+
+	MemSet(&counters, 0, sizeof(counters));
+
+	context = plan->context;
+	context->methods->stats(context, NULL, NULL, &counters);
+
+	context = plan->query_context;
+	context->methods->stats(context, NULL, NULL, &counters);
+
+	if (plan->gplan) {
+		context = plan->gplan->context;
+		context->methods->stats(context, NULL, NULL, &counters);
+	}
+
+	return counters.totalspace;
+}
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index eb78522..068ada5 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -533,6 +533,9 @@ int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 int			tcp_user_timeout;
 
+int			autoprepare_limit;
+int			autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2267,6 +2270,31 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Limits for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal number of autoprepared queries."),
+		 gettext_noop("0 disable autoprepare, -1 means unlimited number of autoprepared queries. Too large number of prepared queries can cause backend memory overflow and slowdown execution speed (because of increased lookup time)")
+		},
+		&autoprepare_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal size of memory used by autoprepared queries."),
+		 gettext_noop("0 disables autoprepare, -1 means that there is no memory limit. "
+					  "Calculating memory used by prepared queries adds some extra overhead, "
+					  "so non-zero value of this parameter may cause some slowdown. autoprepare_limit is much faster way to limit number of autoprepared statements"),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index b88e886..57254c7 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7611,6 +7611,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8,int8}',
+  proargmodes => '{o,o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count,used_memory}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index 2ce8324..11322ce 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 9e24cfa..db3e448 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 							 long count,
 							 DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index a93ed77..8a2718b 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -255,6 +255,9 @@ extern int	log_min_duration_statement;
 extern int	log_temp_files;
 extern double log_xact_sample_rate;
 
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/include/utils/plancache.h b/src/include/utils/plancache.h
index de2555e..dc73e59 100644
--- a/src/include/utils/plancache.h
+++ b/src/include/utils/plancache.h
@@ -222,4 +222,6 @@ extern void ReleaseCachedPlan(CachedPlan *plan, bool useResOwner);
 extern CachedExpression *GetCachedExpression(Node *expr);
 extern void FreeCachedExpression(CachedExpression *cexpr);
 
+extern Size CachedPlanMemoryUsage(CachedPlanSource *plansource);
+
 #endif							/* PLANCACHE_H */
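
For reference, a minimal usage sketch of the GUCs and view added above (not
part of the patch itself). Autoprepare only engages for parameterized
statements sent through the extended query protocol, and the cache is local
to the backend, so the view has to be queried from the same session that ran
the workload; the limit values below are illustrative:

    -- Enable autoprepare first; note that executing SET itself resets the
    -- cache, so set the limits before running the workload.
    SET autoprepare_limit = 100;                 -- keep at most 100 plans
    -- or: SET autoprepare_memory_limit = '1MB'; -- KB-based memory cap

    -- Run parameterized queries through the extended protocol in this
    -- session (e.g. from a driver; plain psql input uses the simple query
    -- protocol and is not autoprepared), then inspect the cache:
    SELECT statement, parameter_types, exec_count, used_memory
      FROM pg_autoprepared_statements
     ORDER BY exec_count DESC;
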
#101Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Konstantin Knizhnik (#100)
Re: [HACKERS] Cached plans and statement generalization

This patch fails the "rules" tests. Please fix.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#102Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Alvaro Herrera (#101)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 25.09.2019 23:06, Alvaro Herrera wrote:

This patch fails the "rules" tests. Please fix.

Sorry,
A new version of the patch with corrected expected output for the rules
test is attached.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-extended-3.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..c4d96e6
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of their compilation
+    and optimization on each execution of the query. For simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance more than twofold.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends) and so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres caches plans for all queries passed using the extended protocol.
+    Execution speed of autoprepared statements is almost the same as that of explicitly prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_limit</varname> or <varname>autoprepare_memory_limit</varname>.
+    These variables limit the number of autoprepared statements and the memory used by them.
+    Autoprepare is enabled if either of them is non-zero. The value -1 means unlimited.
+    Please notice that even when autoprepare is enabled, Postgres decides between using
+    the generalized plan and customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    Too large a number of autoprepared statements can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). So using an unlimited autoprepare hash is dangerous and not recommended.
+    Postgres uses an LRU policy to keep the most frequently used queries in memory.
+  </para>
+
+  <para>
+    The autoprepare hash is local to the backend. It is implicitly reset on any change of the database schema or
+    session variables.
+  </para>
+
+  <para>
+    It is possible to inspect autoprepared queries in the backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (replacing literals),
+    the query execution counter, and the used memory.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 6612f95..a285f6d 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5339,6 +5339,39 @@ SELECT * FROM parent WHERE key = 2400;
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+       <term><varname>autoprepare_limit</varname> (<type>integer/type>)
+           <indexterm>
+             <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum number of autoprepared queries.
+           Zero disables autoprepare, -1 means an unlimited number of autoprepared queries.
+           Too large a number of prepared queries can cause backend memory overflow and slow down execution
+           (because of increased lookup time). The default value is <literal>0</literal>.
+         </para>
+       </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+       <term><varname>autoprepare_memory_limit</varname> (<type>integer/type>)
+           <indexterm>
+             <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum size of memory used by autoprepared queries.
+           Zero disables autoprepare, -1 means that there is no memory limit.
+           The default value is <literal>0</literal>. Calculating the memory used by prepared queries adds some extra overhead,
+           so a positive value of this parameter may cause some slowdown.
+           <varname>autoprepare_limit</varname> is a much faster way to limit the number of autoprepared statements.
+         </para>
+       </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
    </sect1>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..4bd9f31 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index e59cba7..a479f36 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 9fe4a47..7e06389 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -332,6 +332,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
     l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 7e0a041..795b1e5 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 									const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -787,7 +786,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index e8d8e6f..423c7f7 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -71,6 +71,8 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
@@ -171,6 +173,8 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -197,6 +201,163 @@ static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
 
+/* ----------------
+ *		autoprepare stuff
+ * ----------------
+ */
+typedef struct
+{
+	char* query_string;
+	Oid*  param_types;
+	int   num_params;
+} AutoprepareStatementKey;
+
+typedef struct
+{
+	AutoprepareStatementKey key;
+	CachedPlanSource* plan;
+	dlist_node		  lru;		   /* double linked list to implement LRU */
+	int64			  exec_count;  /* counter of execution of this query */
+	Size              used_memory; /* memory used by prepared statement */
+}  AutoprepareStatement;
+
+static AutoprepareStatement* autoprepare_stmt;
+static HTAB* autoprepare_hash;
+static dlist_head autoprepare_lru;
+static MemoryContext autoprepare_context;
+
+#define AUTOPREPARE_HASH_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+static size_t autoprepare_cached_plans;
+static size_t autoprepare_used_memory;
+
+static uint32 autoprepare_hash_fn(const void *key, Size keysize)
+{
+	uint32 hash = 0;
+	int i;
+	AutoprepareStatementKey* ask = (AutoprepareStatementKey*)key;
+	for (i = 0; i < ask->num_params; i++)
+		hash ^= ask->param_types[i];
+	return string_hash(ask->query_string, strlen(ask->query_string)) ^ hash;
+}
+
+static int autoprepare_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	AutoprepareStatementKey* ask1 = (AutoprepareStatementKey*)key1;
+	AutoprepareStatementKey* ask2 = (AutoprepareStatementKey*)key2;
+	if (ask1->num_params != ask2->num_params)
+		return 1;
+	if (memcmp(ask1->param_types, ask2->param_types, ask1->num_params*sizeof(Oid)) != 0)
+		return 1;
+
+	return strcmp(ask1->query_string, ask2->query_string);
+}
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count, used_memory).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(4);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "used_memory",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (autoprepare_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		AutoprepareStatement *autoprep_stmt;
+
+		hash_seq_init(&hash_seq, autoprepare_hash);
+		while ((autoprep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (autoprep_stmt->plan != NULL)
+			{
+				Datum		values[4];
+				bool		nulls[4];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(autoprep_stmt->key.query_string);
+				values[1] = build_regtype_array(autoprep_stmt->key.param_types,
+												autoprep_stmt->key.num_params);
+				values[2] = Int64GetDatum(autoprep_stmt->exec_count);
+				values[3] = Int64GetDatum(autoprep_stmt->used_memory);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
+
+void ResetAutoprepareCache(void)
+{
+	if (autoprepare_hash != NULL)
+	{
+		hash_destroy(autoprepare_hash);
+		MemoryContextReset(autoprepare_context);
+		dlist_init(&autoprepare_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		autoprepare_hash = NULL;
+	}
+}
+
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1393,14 +1554,105 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/* Unnamed prepared statement --- release any prior unnamed stmt */
-		drop_unnamed_stmt();
-		/* Create context for parsing */
-		unnamed_stmt_context =
-			AllocSetContextCreate(MessageContext,
-								  "unnamed prepared statement",
-								  ALLOCSET_DEFAULT_SIZES);
-		oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+		/* check if autoprepare is enabled */
+		if (autoprepare_limit || autoprepare_memory_limit)
+		{
+			bool found;
+			AutoprepareStatementKey key;
+			key.query_string = (char*)query_string;
+			key.param_types = paramTypes;
+			key.num_params = numParams;
+
+			/*
+			 * Construct autoprepare context if not constructed yet.
+			 */
+			if (!autoprepare_context)
+				autoprepare_context = AllocSetContextCreate(CacheMemoryContext,
+														   "autoprepare context",
+														   ALLOCSET_DEFAULT_SIZES);
+
+			oldcontext = MemoryContextSwitchTo(autoprepare_context);
+
+			if (autoprepare_hash == NULL) /* initialize hash if not done yet */
+			{
+				static HASHCTL info;
+				info.keysize = sizeof(AutoprepareStatementKey);
+				info.entrysize = sizeof(AutoprepareStatement);
+				info.hash = autoprepare_hash_fn;
+				info.match = autoprepare_match_fn;
+				autoprepare_hash = hash_create("autoprepare_hash", autoprepare_limit > 0 ? autoprepare_limit : AUTOPREPARE_HASH_SIZE,
+											   &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+				dlist_init(&autoprepare_lru);
+			}
+			autoprepare_stmt = (AutoprepareStatement*)hash_search(autoprepare_hash, &key, HASH_ENTER, &found);
+			if (found)
+			{
+				autoprepare_stmt->exec_count += 1;
+				dlist_delete(&autoprepare_stmt->lru); /* accessed entry will be moved to the head of LRU list */
+				if (autoprepare_stmt->plan != NULL && !autoprepare_stmt->plan->is_valid)
+				{
+					/* Drop invalidated plan: it will be reconstructed later */
+					if (autoprepare_memory_limit)
+						autoprepare_used_memory -= autoprepare_stmt->used_memory;
+					DropCachedPlan(autoprepare_stmt->plan);
+					autoprepare_stmt->plan = NULL;
+					autoprepare_stmt->used_memory = 0;
+				}
+				else
+				{
+					dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+					goto ParseCompleted;
+				}
+			}
+			else
+			{
+				/* Initialize autoprepare hash entry */
+				autoprepare_cached_plans += 1;
+				autoprepare_stmt->plan = NULL;
+				autoprepare_stmt->used_memory = 0;
+				autoprepare_stmt->exec_count = 1;
+
+				/* Copy query text and parameter type info to the autoprepare context */
+				autoprepare_stmt->key.query_string = pstrdup(query_string);
+				autoprepare_stmt->key.param_types = (Oid*)palloc(sizeof(Oid)*numParams);
+				memcpy(autoprepare_stmt->key.param_types, paramTypes, sizeof(Oid)*numParams);
+
+				/* LRU eviction policy */
+				while ((autoprepare_limit > 0 && autoprepare_cached_plans > autoprepare_limit)
+					   || (autoprepare_memory_limit > 0 && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+				{
+					/* Drop least recently accessed query */
+					AutoprepareStatement* victim = dlist_container(AutoprepareStatement, lru, autoprepare_lru.head.prev);
+					dlist_delete(&victim->lru);
+					if (victim->plan)
+					{
+						if (autoprepare_memory_limit)
+							autoprepare_used_memory -= victim->used_memory;
+						DropCachedPlan(victim->plan);
+					}
+					hash_search(autoprepare_hash, victim, HASH_REMOVE, NULL);
+					pfree(victim->key.query_string);
+					pfree(victim->key.param_types);
+					autoprepare_cached_plans -= 1;
+				}
+			}
+			dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+
+			/* Autoprepared statements are also parsed in MessageContext */
+			MemoryContextSwitchTo(MessageContext);
+		}
+		else
+		{
+			/* Unnamed prepared statement --- release any prior unnamed stmt */
+			drop_unnamed_stmt();
+			/* Create context for parsing */
+			unnamed_stmt_context =
+				AllocSetContextCreate(MessageContext,
+									  "unnamed prepared statement",
+									  ALLOCSET_DEFAULT_SIZES);
+			oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+			autoprepare_stmt = NULL;
+		}
 	}
 
 	/*
@@ -1539,13 +1791,21 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/*
-		 * We just save the CachedPlanSource into unnamed_stmt_psrc.
-		 */
 		SaveCachedPlan(psrc);
-		unnamed_stmt_psrc = psrc;
+		if (autoprepare_stmt)
+		{
+			autoprepare_stmt->plan = psrc;
+			autoprepare_stmt->used_memory = CachedPlanMemoryUsage(autoprepare_stmt->plan);
+			autoprepare_used_memory += autoprepare_stmt->used_memory;
+		}
+		else
+			/*
+			 * We just save the CachedPlanSource into unnamed_stmt_psrc.
+			 */
+			unnamed_stmt_psrc = psrc;
 	}
 
+  ParseCompleted:
 	MemoryContextSwitchTo(oldcontext);
 
 	/*
@@ -1633,7 +1893,7 @@ exec_bind_message(StringInfo input_message)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
@@ -2444,7 +2704,7 @@ exec_describe_statement_message(const char *stmt_name)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index c6faa66..27f1c80 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -679,10 +679,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -966,6 +968,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index abc3062..93225a5 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -2022,3 +2022,39 @@ ResetPlanCache(void)
 		cexpr->is_valid = false;
 	}
 }
+
+/*
+ * CachedPlanMemoryUsage: Return memory used by the CachedPlanSource
+ *
+ * Returns the malloced memory used by the two MemoryContexts in the
+ * CachedPlanSource and (if available) the MemoryContext in the generic plan.
+ * Free memory in those contexts is not subtracted, because it is very
+ * unlikely to be reused for anything else and can be considered dead
+ * memory anyway. The size of the CachedPlanSource struct is also included.
+ *
+ * This function is used only for the pg_autoprepared_statements view to allow
+ * client applications to monitor the memory used by prepared statements and
+ * to select candidates for eviction in memory-constrained environments with
+ * automatic preparation of frequently executed queries.
+ */
+Size
+CachedPlanMemoryUsage(CachedPlanSource *plan)
+{
+	MemoryContextCounters counters;
+	MemoryContext context;
+
+	MemSet(&counters, 0, sizeof(counters));
+
+	context = plan->context;
+	context->methods->stats(context, NULL, NULL, &counters);
+
+	context = plan->query_context;
+	context->methods->stats(context, NULL, NULL, &counters);
+
+	if (plan->gplan) {
+		context = plan->gplan->context;
+		context->methods->stats(context, NULL, NULL, &counters);
+	}
+
+	return counters.totalspace;
+}
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 2178e1c..a00cc23 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -533,6 +533,9 @@ int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 int			tcp_user_timeout;
 
+int			autoprepare_limit;
+int			autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2267,6 +2270,31 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Limits for implicit preparing of frequently executed queries
+	 */
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal number of autoprepared queries."),
+		 gettext_noop("0 disable autoprepare, -1 means unlimited number of autoprepared queries. Too large number of prepared queries can cause backend memory overflow and slowdown execution speed (because of increased lookup time)")
+		},
+		&autoprepare_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Maximal size of memory used by autoprepared queries."),
+		 gettext_noop("0 disables autoprepare, -1 means that there is no memory limit. "
+					  "Calculating memory used by prepared queries adds some extra overhead, "
+					  "so non-zero value of this parameter may cause some slowdown. autoprepare_limit is much faster way to limit number of autoprepared statements"),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 58ea5b9..c2a7e2b 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7617,6 +7617,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8,int8}',
+  proargmodes => '{o,o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count,used_memory}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index 2ce8324..11322ce 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 9e24cfa..db3e448 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 							 long count,
 							 DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 6791e0c..a8b80e7 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -255,6 +255,9 @@ extern int	log_min_duration_statement;
 extern int	log_temp_files;
 extern double log_xact_sample_rate;
 
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/include/utils/plancache.h b/src/include/utils/plancache.h
index de2555e..dc73e59 100644
--- a/src/include/utils/plancache.h
+++ b/src/include/utils/plancache.h
@@ -222,4 +222,6 @@ extern void ReleaseCachedPlan(CachedPlan *plan, bool useResOwner);
 extern CachedExpression *GetCachedExpression(Node *expr);
 extern void FreeCachedExpression(CachedExpression *cexpr);
 
+extern Size CachedPlanMemoryUsage(CachedPlanSource *plansource);
+
 #endif							/* PLANCACHE_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 210e9cd..4cb4fa3 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1307,6 +1307,11 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count,
+    p.used_memory
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count, used_memory);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
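
To illustrate the ResetAutoprepareCache() calls wired into utility.c and
inval.c above: any SET, ALTER SYSTEM, or DDL statement executed in a session
discards that backend's autoprepared plans, which is worth remembering when
benchmarking. A hypothetical transcript (the non-zero count is illustrative,
assuming a parameterized workload previously ran over the extended protocol
in this session):

    SELECT count(*) FROM pg_autoprepared_statements;  -- e.g. 42
    SET work_mem = '64MB';  -- T_VariableSetStmt triggers ResetAutoprepareCache()
    SELECT count(*) FROM pg_autoprepared_statements;  -- 0, the cache was dropped
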
#103Michael Paquier
michael@paquier.xyz
In reply to: Konstantin Knizhnik (#102)
Re: [HACKERS] Cached plans and statement generalization

On Thu, Sep 26, 2019 at 10:23:38AM +0300, Konstantin Knizhnik wrote:

Sorry,
A new version of the patch with corrected expected output for the rules test
is attached.

It looks like the documentation is failing to build. Could you fix
that? There may be other issues as well. I have moved the patch to the
next CF, waiting on author.
--
Michael

#104Konstantin Knizhnik
k.knizhnik@postgrespro.ru
In reply to: Michael Paquier (#103)
1 attachment(s)
Re: [HACKERS] Cached plans and statement generalization

On 01.12.2019 6:26, Michael Paquier wrote:

On Thu, Sep 26, 2019 at 10:23:38AM +0300, Konstantin Knizhnik wrote:

Sorry,
A new version of the patch with corrected expected output for the rules test
is attached.

It looks like the documentation is failing to build. Could you fix
that? There may be other issues as well. I have moved the patch to the
next CF, waiting on author.
--
Michael

Sorry, the documentation build issues (unclosed <type> tags in config.sgml) are fixed.

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

autoprepare-extended-4.patch (text/x-patch)
diff --git a/doc/src/sgml/autoprepare.sgml b/doc/src/sgml/autoprepare.sgml
new file mode 100644
index 0000000..c4d96e6
--- /dev/null
+++ b/doc/src/sgml/autoprepare.sgml
@@ -0,0 +1,62 @@
+<!-- doc/src/sgml/autoprepare.sgml -->
+
+ <chapter id="autoprepare">
+  <title>Autoprepared statements</title>
+
+  <indexterm zone="autoprepare">
+   <primary>autoprepared statements</primary>
+  </indexterm>
+
+  <para>
+    <productname>PostgreSQL</productname> makes it possible to <firstterm>prepare</firstterm>
+    frequently used statements to eliminate the cost of their compilation
+    and optimization on each execution of the query. For simple queries
+    (like the ones in <filename>pgbench -S</filename>) using prepared statements
+    increases performance more than twofold.
+  </para>
+
+  <para>
+    Unfortunately, not all database applications use prepared statements
+    and, moreover, it is not always possible. For example, when
+    <productname>pgbouncer</productname> or any other session pooler is used,
+    there is no session state (transactions of one client may be executed by different
+    backends) and so prepared statements cannot be used.
+  </para>
+
+  <para>
+    Autoprepare mode makes it possible to overcome this limitation.
+    In this mode Postgres caches plans for all queries passed using the extended protocol.
+    Execution speed of autoprepared statements is almost the same as that of explicitly prepared statements.
+  </para>
+
+  <para>
+    By default autoprepare mode is switched off. To enable it, assign a non-zero
+    value to the GUC variable <varname>autoprepare_limit</varname> or <varname>autoprepare_memory_limit</varname>.
+    These variables limit the number of autoprepared statements and the memory used by them.
+    Autoprepare is enabled if either of them is non-zero. The value -1 means unlimited.
+    Please notice that even when autoprepare is enabled, Postgres decides between using
+    the generalized plan and customized execution plans based on the results
+    of comparing the average time of five customized plans with the
+    time of the generalized plan.
+  </para>
+
+  <para>
+    Too large a number of autoprepared statements can cause memory overflow
+    (especially if there are many active clients, because the prepared statement cache
+    is local to the backend). So using an unlimited autoprepare hash is dangerous and not recommended.
+    Postgres uses an LRU policy to keep the most frequently used queries in memory.
+  </para>
+
+  <para>
+    The autoprepare hash is local to the backend. It is implicitly reset on any change of the database schema or
+    session variables.
+  </para>
+
+  <para>
+    It is possible to inspect autoprepared queries in the backend using the
+    <literal>pg_autoprepared_statements</literal> view. It shows the original text of the
+    query, the types of the extracted parameters (replacing literals),
+    the query execution counter, and the used memory.
+  </para>
+
+ </chapter>
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 4ec13f3..725e3a9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5432,6 +5432,39 @@ SELECT * FROM parent WHERE key = 2400;
       </listitem>
      </varlistentry>
 
+     <varlistentry id="guc-autoprepare-limit" xreflabel="autoprepare_limit">
+       <term><varname>autoprepare_limit</varname> (<type>integer</type>)
+           <indexterm>
+             <primary><varname>autoprepare_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum number of autoprepared queries.
+           Zero disables autoprepare, -1 means an unlimited number of autoprepared queries.
+           Too large a number of prepared queries can cause backend memory overflow and slow down execution
+           (because of increased lookup time). The default value is <literal>0</literal>.
+         </para>
+       </listitem>
+     </varlistentry>
+
+     <varlistentry id="guc-autoprepare-memory-limit" xreflabel="autoprepare_memory_limit">
+       <term><varname>autoprepare_memory_limit</varname> (<type>integer</type>)
+           <indexterm>
+             <primary><varname>autoprepare_memory_limit</varname> configuration parameter</primary>
+           </indexterm>
+       </term>
+       <listitem>
+         <para>
+           Maximum size of memory used by autoprepared queries.
+           Zero disables autoprepare, -1 means that there is no memory limit.
+           The default value is <literal>0</literal>. Calculating the memory used by prepared queries adds some extra overhead,
+           so a positive value of this parameter may cause some slowdown.
+           <varname>autoprepare_limit</varname> is a much faster way to limit the number of autoprepared statements.
+         </para>
+       </listitem>
+     </varlistentry>
+
      </variablelist>
     </sect2>
    </sect1>
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index 3da2365..4bd9f31 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -22,6 +22,7 @@
 <!ENTITY json       SYSTEM "json.sgml">
 <!ENTITY mvcc       SYSTEM "mvcc.sgml">
 <!ENTITY parallel   SYSTEM "parallel.sgml">
+<!ENTITY autoprepare SYSTEM "autoprepare.sgml">
 <!ENTITY perform    SYSTEM "perform.sgml">
 <!ENTITY queries    SYSTEM "queries.sgml">
 <!ENTITY rangetypes SYSTEM "rangetypes.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index e59cba7..a479f36 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -109,6 +109,7 @@
   &mvcc;
   &perform;
   &parallel;
+  &autoprepare;
 
  </part>
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index f7800f0..000cff3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -332,6 +332,9 @@ CREATE VIEW pg_prepared_xacts AS
 CREATE VIEW pg_prepared_statements AS
     SELECT * FROM pg_prepared_statement() AS P;
 
+CREATE VIEW pg_autoprepared_statements AS
+    SELECT * FROM pg_autoprepared_statement() AS P;
+
 CREATE VIEW pg_seclabels AS
 SELECT
     l.objoid, l.classoid, l.objsubid,
diff --git a/src/backend/commands/prepare.c b/src/backend/commands/prepare.c
index 47bae95..6aa528b 100644
--- a/src/backend/commands/prepare.c
+++ b/src/backend/commands/prepare.c
@@ -48,7 +48,6 @@ static HTAB *prepared_queries = NULL;
 static void InitQueryHashTable(void);
 static ParamListInfo EvaluateParams(PreparedStatement *pstmt, List *params,
 									const char *queryString, EState *estate);
-static Datum build_regtype_array(Oid *param_types, int num_params);
 
 /*
  * Implements the 'PREPARE' utility statement.
@@ -788,7 +787,7 @@ pg_prepared_statement(PG_FUNCTION_ARGS)
  * pointing to a one-dimensional Postgres array of regtypes. An empty
  * array is returned as a zero-element array, not NULL.
  */
-static Datum
+Datum
 build_regtype_array(Oid *param_types, int num_params)
 {
 	Datum	   *tmp_ary;
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 3b85e48..00d6298 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -72,6 +72,8 @@
 #include "tcop/pquery.h"
 #include "tcop/tcopprot.h"
 #include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/fmgrprotos.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/ps_status.h"
@@ -165,6 +167,8 @@ static ProcSignalReason RecoveryConflictReason;
 static MemoryContext row_description_context = NULL;
 static StringInfoData row_description_buf;
 
+
+
 /* ----------------------------------------------------------------
  *		decls for routines only used in this file
  * ----------------------------------------------------------------
@@ -191,6 +195,163 @@ static void enable_statement_timeout(void);
 static void disable_statement_timeout(void);
 
 
+/* ----------------
+ *		autoprepare stuff
+ * ----------------
+ */
+typedef struct
+{
+	char* query_string;
+	Oid*  param_types;
+	int   num_params;
+} AutoprepareStatementKey;
+
+typedef struct
+{
+	AutoprepareStatementKey key;
+	CachedPlanSource* plan;
+	dlist_node		  lru;		   /* double linked list to implement LRU */
+	int64			  exec_count;  /* counter of execution of this query */
+	Size              used_memory; /* memory used by prepared statement */
+}  AutoprepareStatement;
+
+static AutoprepareStatement* autoprepare_stmt;
+static HTAB* autoprepare_hash;
+static dlist_head autoprepare_lru;
+static MemoryContext autoprepare_context;
+
+#define AUTOPREPARE_HASH_SIZE 113
+
+/*
+ * Plan cache access statistics
+ */
+static size_t autoprepare_cached_plans;
+static size_t autoprepare_used_memory;
+
+static uint32 autoprepare_hash_fn(const void *key, Size keysize)
+{
+	uint32 hash = 0;
+	int i;
+	AutoprepareStatementKey* ask = (AutoprepareStatementKey*)key;
+	for (i = 0; i < ask->num_params; i++)
+		hash ^= ask->param_types[i];
+	return string_hash(ask->query_string, strlen(ask->query_string)) ^ hash;
+}
+
+static int autoprepare_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	AutoprepareStatementKey* ask1 = (AutoprepareStatementKey*)key1;
+	AutoprepareStatementKey* ask2 = (AutoprepareStatementKey*)key2;
+	if (ask1->num_params != ask2->num_params)
+		return 1;
+	if (memcmp(ask1->param_types, ask2->param_types, ask1->num_params*sizeof(Oid)) != 0)
+		return 1;
+
+	return strcmp(ask1->query_string, ask2->query_string);
+}
+
+/*
+ * This set-returning function reads all the autoprepared statements and
+ * returns a set of (statement, parameter_types, exec_count, used_memory).
+ */
+Datum
+pg_autoprepared_statement(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* need to build tuplestore in query context */
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	/*
+	 * build tupdesc for result tuples. This must match the definition of the
+	 * pg_autoprepared_statements view in system_views.sql
+	 */
+	tupdesc = CreateTemplateTupleDesc(4);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 1, "statement",
+					   TEXTOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 2, "parameter_types",
+					   REGTYPEARRAYOID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 3, "exec_count",
+					   INT8OID, -1, 0);
+	TupleDescInitEntry(tupdesc, (AttrNumber) 4, "used_memory",
+					   INT8OID, -1, 0);
+
+	/*
+	 * We put all the tuples into a tuplestore in one scan of the hashtable.
+	 * This avoids any issue of the hashtable possibly changing between calls.
+	 */
+	tupstore =
+		tuplestore_begin_heap(rsinfo->allowedModes & SFRM_Materialize_Random,
+							  false, work_mem);
+
+	/* generate junk in short-term context */
+	MemoryContextSwitchTo(oldcontext);
+
+	/* hash table might be uninitialized */
+	if (autoprepare_hash)
+	{
+		HASH_SEQ_STATUS hash_seq;
+		AutoprepareStatement *autoprep_stmt;
+
+		hash_seq_init(&hash_seq, autoprepare_hash);
+		while ((autoprep_stmt = hash_seq_search(&hash_seq)) != NULL)
+		{
+			if (autoprep_stmt->plan != NULL)
+			{
+				Datum		values[4];
+				bool		nulls[4];
+				MemSet(nulls, 0, sizeof(nulls));
+
+				values[0] = CStringGetTextDatum(autoprep_stmt->key.query_string);
+				values[1] = build_regtype_array(autoprep_stmt->key.param_types,
+												autoprep_stmt->key.num_params);
+				values[2] = Int64GetDatum(autoprep_stmt->exec_count);
+				values[3] = Int64GetDatum(autoprep_stmt->used_memory);
+
+				tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+			}
+		}
+	}
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	return (Datum) 0;
+}
+
+void ResetAutoprepareCache(void)
+{
+	if (autoprepare_hash != NULL)
+	{
+		hash_destroy(autoprepare_hash);
+		MemoryContextReset(autoprepare_context);
+		dlist_init(&autoprepare_lru);
+		autoprepare_cached_plans = 0;
+		autoprepare_used_memory = 0;
+		autoprepare_hash = NULL;
+	}
+}
+
 /* ----------------------------------------------------------------
  *		routines to obtain user input
  * ----------------------------------------------------------------
@@ -1394,14 +1555,105 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/* Unnamed prepared statement --- release any prior unnamed stmt */
-		drop_unnamed_stmt();
-		/* Create context for parsing */
-		unnamed_stmt_context =
-			AllocSetContextCreate(MessageContext,
-								  "unnamed prepared statement",
-								  ALLOCSET_DEFAULT_SIZES);
-		oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+		/* check if autoprepare is enabled */
+		if (autoprepare_limit || autoprepare_memory_limit)
+		{
+			bool found;
+			AutoprepareStatementKey key;
+			key.query_string = (char*)query_string;
+			key.param_types = paramTypes;
+			key.num_params = numParams;
+
+			/*
+			 * Construct autoprepare context if not constructed yet.
+			 */
+			if (!autoprepare_context)
+				autoprepare_context = AllocSetContextCreate(CacheMemoryContext,
+														   "autoprepare context",
+														   ALLOCSET_DEFAULT_SIZES);
+
+			oldcontext = MemoryContextSwitchTo(autoprepare_context);
+
+			if (autoprepare_hash == NULL) /* initialize hash if not done yet */
+			{
+				static HASHCTL info;
+				info.keysize = sizeof(AutoprepareStatementKey);
+				info.entrysize = sizeof(AutoprepareStatement);
+				info.hash = autoprepare_hash_fn;
+				info.match = autoprepare_match_fn;
+				autoprepare_hash = hash_create("autoprepare_hash", autoprepare_limit > 0 ? autoprepare_limit : AUTOPREPARE_HASH_SIZE,
+											   &info, HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+				dlist_init(&autoprepare_lru);
+			}
+			autoprepare_stmt = (AutoprepareStatement*)hash_search(autoprepare_hash, &key, HASH_ENTER, &found);
+			if (found)
+			{
+				autoprepare_stmt->exec_count += 1;
+				dlist_delete(&autoprepare_stmt->lru); /* accessed entry will be moved to the head of LRU list */
+				if (autoprepare_stmt->plan != NULL && !autoprepare_stmt->plan->is_valid)
+				{
+					/* Drop invalidated plan: it will be reconstructed later */
+					if (autoprepare_memory_limit)
+						autoprepare_used_memory -= autoprepare_stmt->used_memory;
+					DropCachedPlan(autoprepare_stmt->plan);
+					autoprepare_stmt->plan = NULL;
+					autoprepare_stmt->used_memory = 0;
+				}
+				else
+				{
+					dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+					goto ParseCompleted;
+				}
+			}
+			else
+			{
+				/* Initialize autoprepare hash entry */
+				autoprepare_cached_plans += 1;
+				autoprepare_stmt->plan = NULL;
+				autoprepare_stmt->used_memory = 0;
+				autoprepare_stmt->exec_count = 1;
+
+				/* Copy query text and parameter type info to the autoprepare context */
+				autoprepare_stmt->key.query_string = pstrdup(query_string);
+				autoprepare_stmt->key.param_types = (Oid*)palloc(sizeof(Oid)*numParams);
+				memcpy(autoprepare_stmt->key.param_types, paramTypes, sizeof(Oid)*numParams);
+
+				/* LRU eviction policy */
+				while ((autoprepare_limit > 0 && autoprepare_cached_plans > autoprepare_limit)
+					   || (autoprepare_memory_limit > 0 && autoprepare_used_memory > (size_t)autoprepare_memory_limit*1024))
+				{
+					/* Drop the least recently accessed query */
+					AutoprepareStatement *victim = dlist_container(AutoprepareStatement, lru, autoprepare_lru.head.prev);
+					char	   *victim_query = victim->key.query_string;
+					Oid		   *victim_types = victim->key.param_types;
+
+					dlist_delete(&victim->lru);
+					if (victim->plan)
+					{
+						if (autoprepare_memory_limit)
+							autoprepare_used_memory -= victim->used_memory;
+						DropCachedPlan(victim->plan);
+					}
+
+					/*
+					 * The hash entry must not be touched once it has been
+					 * removed, so free the key storage through the pointers
+					 * saved above.
+					 */
+					hash_search(autoprepare_hash, victim, HASH_REMOVE, NULL);
+					pfree(victim_query);
+					pfree(victim_types);
+					autoprepare_cached_plans -= 1;
+				}
+			}
+			dlist_insert_after(&autoprepare_lru.head, &autoprepare_stmt->lru); /* prepend entry to the head of LRU list */
+
+			/* Autoprepared statements are also parsed in MessageContext */
+			MemoryContextSwitchTo(MessageContext);
+		}
+		else
+		{
+			/* Unnamed prepared statement --- release any prior unnamed stmt */
+			drop_unnamed_stmt();
+			/* Create context for parsing */
+			unnamed_stmt_context =
+				AllocSetContextCreate(MessageContext,
+									  "unnamed prepared statement",
+									  ALLOCSET_DEFAULT_SIZES);
+			oldcontext = MemoryContextSwitchTo(unnamed_stmt_context);
+			autoprepare_stmt = NULL;
+		}
 	}
 
 	/*
@@ -1540,13 +1792,21 @@ exec_parse_message(const char *query_string,	/* string to execute */
 	}
 	else
 	{
-		/*
-		 * We just save the CachedPlanSource into unnamed_stmt_psrc.
-		 */
 		SaveCachedPlan(psrc);
-		unnamed_stmt_psrc = psrc;
+		if (autoprepare_stmt)
+		{
+			autoprepare_stmt->plan = psrc;
+			autoprepare_stmt->used_memory = CachedPlanMemoryUsage(autoprepare_stmt->plan);
+			autoprepare_used_memory += autoprepare_stmt->used_memory;
+		}
+		else
+			/*
+			 * We just save the CachedPlanSource into unnamed_stmt_psrc.
+			 */
+			unnamed_stmt_psrc = psrc;
 	}
 
+  ParseCompleted:
 	MemoryContextSwitchTo(oldcontext);
 
 	/*
@@ -1634,7 +1894,7 @@ exec_bind_message(StringInfo input_message)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
@@ -2466,7 +2726,7 @@ exec_describe_statement_message(const char *stmt_name)
 	else
 	{
 		/* special-case the unnamed statement */
-		psrc = unnamed_stmt_psrc;
+		psrc = autoprepare_stmt ? autoprepare_stmt->plan : unnamed_stmt_psrc;
 		if (!psrc)
 			ereport(ERROR,
 					(errcode(ERRCODE_UNDEFINED_PSTATEMENT),
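The LRU bookkeeping in the exec_parse_message hunks above is the usual
move-to-front discipline over PostgreSQL's intrusive doubly linked lists:
dlist_insert_after(&list.head, node) on a dlist_head is equivalent to
dlist_push_head(&list, node), and list.head.prev is the tail, i.e. the
least recently used entry. A minimal sketch of the idiom (names here are
illustrative, not taken from the patch):

	#include "postgres.h"
	#include "lib/ilist.h"

	typedef struct CacheEntry
	{
		dlist_node	lru;			/* links the entry into lru_list */
		/* ... cached payload ... */
	} CacheEntry;

	static dlist_head lru_list = DLIST_STATIC_INIT(lru_list);

	/* On every cache hit, move the entry to the head of the list. */
	static void
	touch_entry(CacheEntry *entry)
	{
		dlist_delete(&entry->lru);
		dlist_push_head(&lru_list, &entry->lru);
	}

	/* On overflow, evict from the tail: the least recently used entry. */
	static CacheEntry *
	choose_victim(void)
	{
		dlist_node *tail = dlist_tail_node(&lru_list);

		dlist_delete(tail);
		return dlist_container(CacheEntry, lru, tail);
	}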
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 3a03ca7..75db5b1 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -674,10 +674,12 @@ standard_ProcessUtility(PlannedStmt *pstmt,
 		case T_AlterSystemStmt:
 			PreventInTransactionBlock(isTopLevel, "ALTER SYSTEM");
 			AlterSystemSetConfigFile((AlterSystemStmt *) parsetree);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableSetStmt:
 			ExecSetVariableStmt((VariableSetStmt *) parsetree, isTopLevel);
+			ResetAutoprepareCache();
 			break;
 
 		case T_VariableShowStmt:
@@ -961,6 +963,8 @@ ProcessUtilitySlow(ParseState *pstate,
 	/* All event trigger calls are done only when isCompleteQuery is true */
 	needCleanup = isCompleteQuery && EventTriggerBeginCompleteQuery();
 
+	ResetAutoprepareCache();
+
 	/* PG_TRY block is to ensure we call EventTriggerEndCompleteQuery */
 	PG_TRY();
 	{
diff --git a/src/backend/utils/cache/inval.c b/src/backend/utils/cache/inval.c
index f09e3a9..6168eee 100644
--- a/src/backend/utils/cache/inval.c
+++ b/src/backend/utils/cache/inval.c
@@ -113,6 +113,7 @@
 #include "utils/relmapper.h"
 #include "utils/snapmgr.h"
 #include "utils/syscache.h"
+#include "tcop/pquery.h"
 
 
 /*
@@ -646,6 +647,7 @@ InvalidateSystemCaches(void)
 
 	InvalidateCatalogSnapshot();
 	ResetCatalogCaches();
+	ResetAutoprepareCache();
 	RelationCacheInvalidate();	/* gets smgr and relmap too */
 
 	for (i = 0; i < syscache_callback_count; i++)
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index abc3062..93225a5 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -2022,3 +2022,39 @@ ResetPlanCache(void)
 		cexpr->is_valid = false;
 	}
 }
+
+/*
+ * CachedPlanMemoryUsage: return the memory used by a CachedPlanSource
+ *
+ * Returns the malloc'ed memory used by the two MemoryContexts in the
+ * CachedPlanSource and (if available) the MemoryContext of the generic plan.
+ * Free space within those MemoryContexts is not subtracted, because it is
+ * very unlikely to be reused for anything else anymore and can be considered
+ * dead memory anyway.  The size of the CachedPlanSource struct itself is
+ * included as well, since it is allocated within plan->context.
+ *
+ * This function is used by the pg_autoprepared_statements view, to let
+ * client applications monitor the memory used by prepared statements, and
+ * to select candidates for eviction in memory-constrained environments
+ * with automatic preparation of frequently executed queries.
+ */
+Size
+CachedPlanMemoryUsage(CachedPlanSource *plan)
+{
+	MemoryContextCounters counters;
+	MemoryContext context;
+
+	/* The stats methods accumulate into the counters, so zero them first. */
+	memset(&counters, 0, sizeof(counters));
+
+	context = plan->context;
+	context->methods->stats(context, NULL, NULL, &counters);
+
+	context = plan->query_context;
+	context->methods->stats(context, NULL, NULL, &counters);
+
+	if (plan->gplan)
+	{
+		context = plan->gplan->context;
+		context->methods->stats(context, NULL, NULL, &counters);
+	}
+
+	return counters.totalspace;
+}
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 5fccc96..1db741e 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -540,6 +540,9 @@ int			tcp_keepalives_interval;
 int			tcp_keepalives_count;
 int			tcp_user_timeout;
 
+int			autoprepare_limit;
+int			autoprepare_memory_limit;
+
 /*
  * SSL renegotiation was been removed in PostgreSQL 9.5, but we tolerate it
  * being set to zero (meaning never renegotiate) for backward compatibility.
@@ -2286,6 +2289,31 @@ static struct config_int ConfigureNamesInt[] =
 		check_max_stack_depth, assign_max_stack_depth, NULL
 	},
 
+	/*
+	 * Limits for implicit preparation of frequently executed queries
+	 */
+	{
+		{"autoprepare_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Sets the maximum number of autoprepared queries."),
+		 gettext_noop("0 disables autoprepare (unless autoprepare_memory_limit is set); "
+					  "-1 means an unlimited number of autoprepared queries. "
+					  "Caching too many prepared queries can exhaust backend memory "
+					  "and slow down execution (because of increased lookup time).")
+		},
+		&autoprepare_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+	{
+		{"autoprepare_memory_limit", PGC_USERSET, QUERY_TUNING_OTHER,
+		 gettext_noop("Sets the maximum amount of memory used by autoprepared queries."),
+		 gettext_noop("0 disables autoprepare (unless autoprepare_limit is set); "
+					  "-1 means there is no memory limit. "
+					  "Calculating the memory used by prepared queries adds some extra overhead, "
+					  "so a non-zero value of this parameter may cause some slowdown; "
+					  "autoprepare_limit is a much cheaper way to bound the number of autoprepared statements."),
+		 GUC_UNIT_KB
+		},
+		&autoprepare_memory_limit,
+		0, -1, INT_MAX,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"temp_file_limit", PGC_SUSET, RESOURCES_DISK,
 			gettext_noop("Limits the total size of all temporary files used by each process."),
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index ac8f64b..b759b15 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7617,6 +7617,13 @@
   proargmodes => '{o,o,o,o,o}',
   proargnames => '{name,statement,prepare_time,parameter_types,from_sql}',
   prosrc => 'pg_prepared_statement' },
+{ oid => '4035', descr => 'get the autoprepared statements for this session',
+  proname => 'pg_autoprepared_statement', prorows => '1000', proretset => 't',
+  provolatile => 's', proparallel => 'r', prorettype => 'record',
+  proargtypes => '', proallargtypes => '{text,_regtype,int8,int8}',
+  proargmodes => '{o,o,o,o}',
+  proargnames => '{statement,parameter_types,exec_count,used_memory}',
+  prosrc => 'pg_autoprepared_statement' },
 { oid => '2511', descr => 'get the open cursors for this session',
   proname => 'pg_cursor', prorows => '1000', proretset => 't',
   provolatile => 's', proparallel => 'r', prorettype => 'record',
diff --git a/src/include/commands/prepare.h b/src/include/commands/prepare.h
index 2ce8324..11322ce 100644
--- a/src/include/commands/prepare.h
+++ b/src/include/commands/prepare.h
@@ -56,5 +56,6 @@ extern TupleDesc FetchPreparedStatementResultDesc(PreparedStatement *stmt);
 extern List *FetchPreparedStatementTargetList(PreparedStatement *stmt);
 
 extern void DropAllPreparedStatements(void);
+extern Datum build_regtype_array(Oid *param_types, int num_params);
 
 #endif							/* PREPARE_H */
diff --git a/src/include/tcop/pquery.h b/src/include/tcop/pquery.h
index 9e24cfa..db3e448 100644
--- a/src/include/tcop/pquery.h
+++ b/src/include/tcop/pquery.h
@@ -42,4 +42,6 @@ extern uint64 PortalRunFetch(Portal portal,
 							 long count,
 							 DestReceiver *dest);
 
+extern void ResetAutoprepareCache(void);
+
 #endif							/* PQUERY_H */
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index 50098e6..f892f6e 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -259,6 +259,9 @@ extern double log_xact_sample_rate;
 extern char *backtrace_functions;
 extern char *backtrace_symbol_list;
 
+extern int  autoprepare_limit;
+extern int  autoprepare_memory_limit;
+
 extern int	temp_file_limit;
 
 extern int	num_temp_buffers;
diff --git a/src/include/utils/plancache.h b/src/include/utils/plancache.h
index de2555e..dc73e59 100644
--- a/src/include/utils/plancache.h
+++ b/src/include/utils/plancache.h
@@ -222,4 +222,6 @@ extern void ReleaseCachedPlan(CachedPlan *plan, bool useResOwner);
 extern CachedExpression *GetCachedExpression(Node *expr);
 extern void FreeCachedExpression(CachedExpression *cexpr);
 
+extern Size CachedPlanMemoryUsage(CachedPlanSource *plansource);
+
 #endif							/* PLANCACHE_H */
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index c9cc569..87da01b 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1307,6 +1307,11 @@ mvtest_tvv| SELECT sum(mvtest_tv.totamt) AS grandtot
    FROM mvtest_tv;
 mvtest_tvvmv| SELECT mvtest_tvvm.grandtot
    FROM mvtest_tvvm;
+pg_autoprepared_statements| SELECT p.statement,
+    p.parameter_types,
+    p.exec_count,
+    p.used_memory
+   FROM pg_autoprepared_statement() p(statement, parameter_types, exec_count, used_memory);
 pg_available_extension_versions| SELECT e.name,
     e.version,
     (x.extname IS NOT NULL) AS installed,
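With the patch applied, a session can inspect its autoprepare cache through
the new view (results illustrative):

	SET autoprepare_limit = 100;
	-- execute the same parameterized query several times through the
	-- extended protocol, then:
	SELECT statement, parameter_types, exec_count,
		   pg_size_pretty(used_memory) AS used_memory
	  FROM pg_autoprepared_statements;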
#105Tom Lane
tgl@sss.pgh.pa.us
In reply to: Konstantin Knizhnik (#104)
Re: [HACKERS] Cached plans and statement generalization

Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:

[ autoprepare-extended-4.patch ]

The cfbot is showing that this doesn't apply anymore; there's
some doubtless-trivial conflict in prepare.c.

However ... TBH I've been skeptical of this whole proposal from the
beginning, on the grounds that it would probably hurt more use-cases
than it helps. The latest approach doesn't really do anything to
assuage that fear, because now that you've restricted it to extended
query mode, the feature amounts to nothing more than deliberately
overriding the application's choice to use unnamed rather than named
(prepared) statements. How often is that going to be a good idea?
And when it is, wouldn't it be better to fix the app? The client is
likely to have a far better idea of which statements would benefit
from this treatment than the server will; and in this context,
the client-side changes needed would really be trivial. The original
proposal, scary as it was, at least supported the argument that you
could use it to improve applications that were too dumb/hoary to
parameterize commands for themselves.
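For concreteness, the trivial client-side change in question is switching
from the unnamed statement that PQexecParams sends to an explicitly named
prepared statement. A libpq sketch with an illustrative query, assuming an
established PGconn *conn:

	/* before: unnamed statement, re-sent on every call; this is the
	 * traffic the patch intercepts server-side */
	const char *paramValues[1] = {"42"};
	PGresult   *res;

	res = PQexecParams(conn,
					   "SELECT abalance FROM pgbench_accounts WHERE aid = $1",
					   1, NULL, paramValues, NULL, NULL, 0);

	/* after: named statement, parsed and saved once per session */
	PQprepare(conn, "get_balance",
			  "SELECT abalance FROM pgbench_accounts WHERE aid = $1",
			  1, NULL);
	res = PQexecPrepared(conn, "get_balance",
						 1, paramValues, NULL, NULL, 0);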

I'm also unimpressed by benchmark testing that relies entirely on
pgbench's default scenario, because that scenario consists entirely
of queries that are perfectly adapted to plan-only-once treatment.
In the real world, we constantly see complaints about cases where the
plancache mistakenly decides that a generic plan is acceptable. I think
that extending that behavior to more cases is going to be a net loss,
until we find some way to make it smarter and more reliable. At the
very least, you need to show some worst-case numbers alongside these
best-case numbers --- what happens with queries where we conclude we
must replan every time, so that the plancache becomes pure overhead?
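One way to get such worst-case numbers is a custom pgbench script whose
best plan depends strongly on the parameter value, so that a generic plan
is a poor fit (a sketch, not from the thread):

	-- skewed.sql: selectivity varies wildly with :aid
	\set aid random(1, 100000 * :scale)
	SELECT count(*) FROM pgbench_accounts WHERE aid > :aid;

run as "pgbench -n -M extended -f skewed.sql -T 60" with autoprepare
enabled and disabled.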

The pgbench test case is laughably unrealistic in another way, in that
there are so few distinct queries it issues, so that there's no
stress at all on the cache-management aspect of this problem.

In short, I really think we ought to reject this patch and move on.
Maybe it could be resurrected sometime in the future when we have a
better handle on when to cache plans and when not.

If you want to press forward with it anyway, certainly the lack of
any tests in this patch is another big objection. Perhaps you
could create a pgbench TAP script that exercises the logic.
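A sketch of what such a TAP script might look like, using the PostgresNode
infrastructure of that era (untested, settings illustrative):

	use strict;
	use warnings;
	use PostgresNode;
	use TestLib;
	use Test::More tests => 2;

	my $node = PostgresNode->get_new_node('autoprep');
	$node->init;
	$node->append_conf('postgresql.conf', 'autoprepare_limit = 16');
	$node->start;

	# pgbench -i creates and populates the standard tables
	$node->command_ok([ 'pgbench', '-i', 'postgres' ], 'initialize pgbench');

	# a select-only run in extended query mode exercises the autoprepare path
	$node->command_ok(
		[ 'pgbench', '-n', '-M', 'extended', '-S', '-t', '1000', 'postgres' ],
		'select-only workload with autoprepare enabled');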

regards, tom lane

#106Daniel Gustafsson
daniel@yesql.se
In reply to: Tom Lane (#105)
Re: [HACKERS] Cached plans and statement generalization

On 1 Mar 2020, at 20:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:

In short, I really think we ought to reject this patch and move on.
Maybe it could be resurrected sometime in the future when we have a
better handle on when to cache plans and when not.

If you want to press forward with it anyway, certainly the lack of
any tests in this patch is another big objection. Perhaps you
could create a pgbench TAP script that exercises the logic.

Based on Tom's review, and the fact that nothing has been submitted since,
I've marked this entry as returned with feedback. Feel free to open a new
entry if you want to address Tom's comments and take this further.

cheers ./daniel

#107Юрий Соколов
funny.falcon@gmail.com
In reply to: Tom Lane (#105)
Re: [HACKERS] Cached plans and statement generalization

On Sun, 1 Mar 2020 at 22:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:

[ autoprepare-extended-4.patch ]

The cfbot is showing that this doesn't apply anymore; there's
some doubtless-trivial conflict in prepare.c.

However ... TBH I've been skeptical of this whole proposal from the
beginning, on the grounds that it would probably hurt more use-cases
than it helps. The latest approach doesn't really do anything to
assuage that fear, because now that you've restricted it to extended
query mode, the feature amounts to nothing more than deliberately
overriding the application's choice to use unnamed rather than named
(prepared) statements. How often is that going to be a good idea?
And when it is, wouldn't it be better to fix the app? The client is
likely to have a far better idea of which statements would benefit
from this treatment than the server will; and in this context,
the client-side changes needed would really be trivial. The original
proposal, scary as it was, at least supported the argument that you
could use it to improve applications that were too dumb/hoary to
parameterize commands for themselves.

The theme of this thread:
- it is not possible to reliably "fix the app" when pgbouncer or a
driver's internal connection multiplexing is used.
- another widespread case is frameworks (ORMs and the like): there is
no way to prepare a concrete query because it is buried under layers
of abstraction.

The whole "autoprepare" thing is a workaround for the absence of those
"really trivial client-side changes" under these conditions.

regards,
Yura