Core Extensions relocation

Started by Greg Smithover 14 years ago72 messages
#1Greg Smith
greg@2ndquadrant.com
1 attachment(s)

Following up on the idea we've been exploring for making some extensions
more prominent, attached is the first rev that I think may be worth
considering seriously. Main improvement from the last is that I
reorganized the docs to break out what I decided to tentatively name
"Core Extensions" into their own chapter. No longer mixed in with the
rest of the contrib modules, and I introduce them a bit differently.
If you want to take a quick look at the new page, I copied it to
http://www.2ndquadrant.us/docs/html/extensions.html

I'm not completely happy on the wordering there yet. The use of both
"modules" and "extensions" is probably worth eliminating, and maybe that
continues on to doing that against the language I swiped from the
contrib intro too. There's also a lot of shared text at the end there,
common wording from that and the contrib page about how to install and
migrate these extensions. Not sure how to refactor it out into another
section cleanly though.

Regression tests came up last time I posted this. Doesn't look like
there are any for the modules I'm suggesting should be promoted. Only
code issue I noticed during another self-review here is that I didn't
rename contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql cleanly, may
need to do that one over again to get the commits as clean as possible.

Updated code is at
https://github.com/greg2ndQuadrant/postgres/tree/move-contrib too, and
since this is painful as a patch the compare view at
https://github.com/greg2ndQuadrant/postgres/compare/master...move-contrib
will be easier for browsing the code changes.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

Attachments:

move-contrib-v3.patchtext/x-patch; name=move-contrib-v3.patchDownload
diff --git a/contrib/Makefile b/contrib/Makefile
index 6967767..bcb9465 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -7,7 +7,6 @@ include $(top_builddir)/src/Makefile.global
 SUBDIRS = \
 		adminpack	\
 		auth_delay	\
-		auto_explain	\
 		btree_gin	\
 		btree_gist	\
 		chkpass		\
@@ -27,21 +26,15 @@ SUBDIRS = \
 		lo		\
 		ltree		\
 		oid2name	\
-		pageinspect	\
 		passwordcheck	\
 		pg_archivecleanup \
-		pg_buffercache	\
-		pg_freespacemap \
 		pg_standby	\
-		pg_stat_statements \
 		pg_test_fsync	\
 		pg_trgm		\
 		pg_upgrade	\
 		pg_upgrade_support \
 		pgbench		\
 		pgcrypto	\
-		pgrowlocks	\
-		pgstattuple	\
 		seg		\
 		spi		\
 		tablefunc	\
diff --git a/contrib/auto_explain/Makefile b/contrib/auto_explain/Makefile
deleted file mode 100644
index 2d1443f..0000000
--- a/contrib/auto_explain/Makefile
+++ /dev/null
@@ -1,15 +0,0 @@
-# contrib/auto_explain/Makefile
-
-MODULE_big = auto_explain
-OBJS = auto_explain.o
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/auto_explain
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c
deleted file mode 100644
index b320698..0000000
--- a/contrib/auto_explain/auto_explain.c
+++ /dev/null
@@ -1,304 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * auto_explain.c
- *
- *
- * Copyright (c) 2008-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *	  contrib/auto_explain/auto_explain.c
- *
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include "commands/explain.h"
-#include "executor/instrument.h"
-#include "utils/guc.h"
-
-PG_MODULE_MAGIC;
-
-/* GUC variables */
-static int	auto_explain_log_min_duration = -1; /* msec or -1 */
-static bool auto_explain_log_analyze = false;
-static bool auto_explain_log_verbose = false;
-static bool auto_explain_log_buffers = false;
-static int	auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
-static bool auto_explain_log_nested_statements = false;
-
-static const struct config_enum_entry format_options[] = {
-	{"text", EXPLAIN_FORMAT_TEXT, false},
-	{"xml", EXPLAIN_FORMAT_XML, false},
-	{"json", EXPLAIN_FORMAT_JSON, false},
-	{"yaml", EXPLAIN_FORMAT_YAML, false},
-	{NULL, 0, false}
-};
-
-/* Current nesting depth of ExecutorRun calls */
-static int	nesting_level = 0;
-
-/* Saved hook values in case of unload */
-static ExecutorStart_hook_type prev_ExecutorStart = NULL;
-static ExecutorRun_hook_type prev_ExecutorRun = NULL;
-static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
-static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
-
-#define auto_explain_enabled() \
-	(auto_explain_log_min_duration >= 0 && \
-	 (nesting_level == 0 || auto_explain_log_nested_statements))
-
-void		_PG_init(void);
-void		_PG_fini(void);
-
-static void explain_ExecutorStart(QueryDesc *queryDesc, int eflags);
-static void explain_ExecutorRun(QueryDesc *queryDesc,
-					ScanDirection direction,
-					long count);
-static void explain_ExecutorFinish(QueryDesc *queryDesc);
-static void explain_ExecutorEnd(QueryDesc *queryDesc);
-
-
-/*
- * Module load callback
- */
-void
-_PG_init(void)
-{
-	/* Define custom GUC variables. */
-	DefineCustomIntVariable("auto_explain.log_min_duration",
-		 "Sets the minimum execution time above which plans will be logged.",
-						 "Zero prints all plans. -1 turns this feature off.",
-							&auto_explain_log_min_duration,
-							-1,
-							-1, INT_MAX / 1000,
-							PGC_SUSET,
-							GUC_UNIT_MS,
-							NULL,
-							NULL,
-							NULL);
-
-	DefineCustomBoolVariable("auto_explain.log_analyze",
-							 "Use EXPLAIN ANALYZE for plan logging.",
-							 NULL,
-							 &auto_explain_log_analyze,
-							 false,
-							 PGC_SUSET,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	DefineCustomBoolVariable("auto_explain.log_verbose",
-							 "Use EXPLAIN VERBOSE for plan logging.",
-							 NULL,
-							 &auto_explain_log_verbose,
-							 false,
-							 PGC_SUSET,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	DefineCustomBoolVariable("auto_explain.log_buffers",
-							 "Log buffers usage.",
-							 NULL,
-							 &auto_explain_log_buffers,
-							 false,
-							 PGC_SUSET,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	DefineCustomEnumVariable("auto_explain.log_format",
-							 "EXPLAIN format to be used for plan logging.",
-							 NULL,
-							 &auto_explain_log_format,
-							 EXPLAIN_FORMAT_TEXT,
-							 format_options,
-							 PGC_SUSET,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	DefineCustomBoolVariable("auto_explain.log_nested_statements",
-							 "Log nested statements.",
-							 NULL,
-							 &auto_explain_log_nested_statements,
-							 false,
-							 PGC_SUSET,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	EmitWarningsOnPlaceholders("auto_explain");
-
-	/* Install hooks. */
-	prev_ExecutorStart = ExecutorStart_hook;
-	ExecutorStart_hook = explain_ExecutorStart;
-	prev_ExecutorRun = ExecutorRun_hook;
-	ExecutorRun_hook = explain_ExecutorRun;
-	prev_ExecutorFinish = ExecutorFinish_hook;
-	ExecutorFinish_hook = explain_ExecutorFinish;
-	prev_ExecutorEnd = ExecutorEnd_hook;
-	ExecutorEnd_hook = explain_ExecutorEnd;
-}
-
-/*
- * Module unload callback
- */
-void
-_PG_fini(void)
-{
-	/* Uninstall hooks. */
-	ExecutorStart_hook = prev_ExecutorStart;
-	ExecutorRun_hook = prev_ExecutorRun;
-	ExecutorFinish_hook = prev_ExecutorFinish;
-	ExecutorEnd_hook = prev_ExecutorEnd;
-}
-
-/*
- * ExecutorStart hook: start up logging if needed
- */
-static void
-explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
-{
-	if (auto_explain_enabled())
-	{
-		/* Enable per-node instrumentation iff log_analyze is required. */
-		if (auto_explain_log_analyze && (eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0)
-		{
-			queryDesc->instrument_options |= INSTRUMENT_TIMER;
-			if (auto_explain_log_buffers)
-				queryDesc->instrument_options |= INSTRUMENT_BUFFERS;
-		}
-	}
-
-	if (prev_ExecutorStart)
-		prev_ExecutorStart(queryDesc, eflags);
-	else
-		standard_ExecutorStart(queryDesc, eflags);
-
-	if (auto_explain_enabled())
-	{
-		/*
-		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
-		 * space is allocated in the per-query context so it will go away at
-		 * ExecutorEnd.
-		 */
-		if (queryDesc->totaltime == NULL)
-		{
-			MemoryContext oldcxt;
-
-			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
-			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
-			MemoryContextSwitchTo(oldcxt);
-		}
-	}
-}
-
-/*
- * ExecutorRun hook: all we need do is track nesting depth
- */
-static void
-explain_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
-{
-	nesting_level++;
-	PG_TRY();
-	{
-		if (prev_ExecutorRun)
-			prev_ExecutorRun(queryDesc, direction, count);
-		else
-			standard_ExecutorRun(queryDesc, direction, count);
-		nesting_level--;
-	}
-	PG_CATCH();
-	{
-		nesting_level--;
-		PG_RE_THROW();
-	}
-	PG_END_TRY();
-}
-
-/*
- * ExecutorFinish hook: all we need do is track nesting depth
- */
-static void
-explain_ExecutorFinish(QueryDesc *queryDesc)
-{
-	nesting_level++;
-	PG_TRY();
-	{
-		if (prev_ExecutorFinish)
-			prev_ExecutorFinish(queryDesc);
-		else
-			standard_ExecutorFinish(queryDesc);
-		nesting_level--;
-	}
-	PG_CATCH();
-	{
-		nesting_level--;
-		PG_RE_THROW();
-	}
-	PG_END_TRY();
-}
-
-/*
- * ExecutorEnd hook: log results if needed
- */
-static void
-explain_ExecutorEnd(QueryDesc *queryDesc)
-{
-	if (queryDesc->totaltime && auto_explain_enabled())
-	{
-		double		msec;
-
-		/*
-		 * Make sure stats accumulation is done.  (Note: it's okay if several
-		 * levels of hook all do this.)
-		 */
-		InstrEndLoop(queryDesc->totaltime);
-
-		/* Log plan if duration is exceeded. */
-		msec = queryDesc->totaltime->total * 1000.0;
-		if (msec >= auto_explain_log_min_duration)
-		{
-			ExplainState es;
-
-			ExplainInitState(&es);
-			es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
-			es.verbose = auto_explain_log_verbose;
-			es.buffers = (es.analyze && auto_explain_log_buffers);
-			es.format = auto_explain_log_format;
-
-			ExplainBeginOutput(&es);
-			ExplainQueryText(&es, queryDesc);
-			ExplainPrintPlan(&es, queryDesc);
-			ExplainEndOutput(&es);
-
-			/* Remove last line break */
-			if (es.str->len > 0 && es.str->data[es.str->len - 1] == '\n')
-				es.str->data[--es.str->len] = '\0';
-
-			/*
-			 * Note: we rely on the existing logging of context or
-			 * debug_query_string to identify just which statement is being
-			 * reported.  This isn't ideal but trying to do it here would
-			 * often result in duplication.
-			 */
-			ereport(LOG,
-					(errmsg("duration: %.3f ms  plan:\n%s",
-							msec, es.str->data),
-					 errhidestmt(true)));
-
-			pfree(es.str->data);
-		}
-	}
-
-	if (prev_ExecutorEnd)
-		prev_ExecutorEnd(queryDesc);
-	else
-		standard_ExecutorEnd(queryDesc);
-}
diff --git a/contrib/pageinspect/Makefile b/contrib/pageinspect/Makefile
deleted file mode 100644
index 13ba6d3..0000000
--- a/contrib/pageinspect/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pageinspect/Makefile
-
-MODULE_big	= pageinspect
-OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
-
-EXTENSION = pageinspect
-DATA = pageinspect--1.0.sql pageinspect--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pageinspect
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
deleted file mode 100644
index ef27cd4..0000000
--- a/contrib/pageinspect/btreefuncs.c
+++ /dev/null
@@ -1,502 +0,0 @@
-/*
- * contrib/pageinspect/btreefuncs.c
- *
- *
- * btreefuncs.c
- *
- * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/nbtree.h"
-#include "catalog/namespace.h"
-#include "catalog/pg_type.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "utils/builtins.h"
-
-
-extern Datum bt_metap(PG_FUNCTION_ARGS);
-extern Datum bt_page_items(PG_FUNCTION_ARGS);
-extern Datum bt_page_stats(PG_FUNCTION_ARGS);
-
-PG_FUNCTION_INFO_V1(bt_metap);
-PG_FUNCTION_INFO_V1(bt_page_items);
-PG_FUNCTION_INFO_V1(bt_page_stats);
-
-#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
-#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
-
-#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
-		if ( !(FirstOffsetNumber <= (offnum) && \
-						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
-			 elog(ERROR, "page offset number out of range"); }
-
-/* note: BlockNumber is unsigned, hence can't be negative */
-#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
-		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
-			 elog(ERROR, "block number out of range"); }
-
-/* ------------------------------------------------
- * structure for single btree page statistics
- * ------------------------------------------------
- */
-typedef struct BTPageStat
-{
-	uint32		blkno;
-	uint32		live_items;
-	uint32		dead_items;
-	uint32		page_size;
-	uint32		max_avail;
-	uint32		free_size;
-	uint32		avg_item_size;
-	char		type;
-
-	/* opaque data */
-	BlockNumber btpo_prev;
-	BlockNumber btpo_next;
-	union
-	{
-		uint32		level;
-		TransactionId xact;
-	}			btpo;
-	uint16		btpo_flags;
-	BTCycleId	btpo_cycleid;
-} BTPageStat;
-
-
-/* -------------------------------------------------
- * GetBTPageStatistics()
- *
- * Collect statistics of single b-tree page
- * -------------------------------------------------
- */
-static void
-GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
-{
-	Page		page = BufferGetPage(buffer);
-	PageHeader	phdr = (PageHeader) page;
-	OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
-	BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
-	int			item_size = 0;
-	int			off;
-
-	stat->blkno = blkno;
-
-	stat->max_avail = BLCKSZ - (BLCKSZ - phdr->pd_special + SizeOfPageHeaderData);
-
-	stat->dead_items = stat->live_items = 0;
-
-	stat->page_size = PageGetPageSize(page);
-
-	/* page type (flags) */
-	if (P_ISDELETED(opaque))
-	{
-		stat->type = 'd';
-		stat->btpo.xact = opaque->btpo.xact;
-		return;
-	}
-	else if (P_IGNORE(opaque))
-		stat->type = 'e';
-	else if (P_ISLEAF(opaque))
-		stat->type = 'l';
-	else if (P_ISROOT(opaque))
-		stat->type = 'r';
-	else
-		stat->type = 'i';
-
-	/* btpage opaque data */
-	stat->btpo_prev = opaque->btpo_prev;
-	stat->btpo_next = opaque->btpo_next;
-	stat->btpo.level = opaque->btpo.level;
-	stat->btpo_flags = opaque->btpo_flags;
-	stat->btpo_cycleid = opaque->btpo_cycleid;
-
-	/* count live and dead tuples, and free space */
-	for (off = FirstOffsetNumber; off <= maxoff; off++)
-	{
-		IndexTuple	itup;
-
-		ItemId		id = PageGetItemId(page, off);
-
-		itup = (IndexTuple) PageGetItem(page, id);
-
-		item_size += IndexTupleSize(itup);
-
-		if (!ItemIdIsDead(id))
-			stat->live_items++;
-		else
-			stat->dead_items++;
-	}
-	stat->free_size = PageGetFreeSpace(page);
-
-	if ((stat->live_items + stat->dead_items) > 0)
-		stat->avg_item_size = item_size / (stat->live_items + stat->dead_items);
-	else
-		stat->avg_item_size = 0;
-}
-
-/* -----------------------------------------------
- * bt_page()
- *
- * Usage: SELECT * FROM bt_page('t1_pkey', 1);
- * -----------------------------------------------
- */
-Datum
-bt_page_stats(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	uint32		blkno = PG_GETARG_UINT32(1);
-	Buffer		buffer;
-	Relation	rel;
-	RangeVar   *relrv;
-	Datum		result;
-	HeapTuple	tuple;
-	TupleDesc	tupleDesc;
-	int			j;
-	char	   *values[11];
-	BTPageStat	stat;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use pageinspect functions"))));
-
-	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-	rel = relation_openrv(relrv, AccessShareLock);
-
-	if (!IS_INDEX(rel) || !IS_BTREE(rel))
-		elog(ERROR, "relation \"%s\" is not a btree index",
-			 RelationGetRelationName(rel));
-
-	/*
-	 * Reject attempts to read non-local temporary relations; we would be
-	 * likely to get wrong data since we have no visibility into the owning
-	 * session's local buffers.
-	 */
-	if (RELATION_IS_OTHER_TEMP(rel))
-		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("cannot access temporary tables of other sessions")));
-
-	if (blkno == 0)
-		elog(ERROR, "block 0 is a meta page");
-
-	CHECK_RELATION_BLOCK_RANGE(rel, blkno);
-
-	buffer = ReadBuffer(rel, blkno);
-
-	/* keep compiler quiet */
-	stat.btpo_prev = stat.btpo_next = InvalidBlockNumber;
-	stat.btpo_flags = stat.free_size = stat.avg_item_size = 0;
-
-	GetBTPageStatistics(blkno, buffer, &stat);
-
-	/* Build a tuple descriptor for our result type */
-	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-		elog(ERROR, "return type must be a row type");
-
-	j = 0;
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.blkno);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%c", stat.type);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.live_items);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.dead_items);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.avg_item_size);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.page_size);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.free_size);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.btpo_prev);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.btpo_next);
-	values[j] = palloc(32);
-	if (stat.type == 'd')
-		snprintf(values[j++], 32, "%d", stat.btpo.xact);
-	else
-		snprintf(values[j++], 32, "%d", stat.btpo.level);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", stat.btpo_flags);
-
-	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
-								   values);
-
-	result = HeapTupleGetDatum(tuple);
-
-	ReleaseBuffer(buffer);
-
-	relation_close(rel, AccessShareLock);
-
-	PG_RETURN_DATUM(result);
-}
-
-/*-------------------------------------------------------
- * bt_page_items()
- *
- * Get IndexTupleData set in a btree page
- *
- * Usage: SELECT * FROM bt_page_items('t1_pkey', 1);
- *-------------------------------------------------------
- */
-
-/*
- * cross-call data structure for SRF
- */
-struct user_args
-{
-	Page		page;
-	OffsetNumber offset;
-};
-
-Datum
-bt_page_items(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	uint32		blkno = PG_GETARG_UINT32(1);
-	Datum		result;
-	char	   *values[6];
-	HeapTuple	tuple;
-	FuncCallContext *fctx;
-	MemoryContext mctx;
-	struct user_args *uargs;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use pageinspect functions"))));
-
-	if (SRF_IS_FIRSTCALL())
-	{
-		RangeVar   *relrv;
-		Relation	rel;
-		Buffer		buffer;
-		BTPageOpaque opaque;
-		TupleDesc	tupleDesc;
-
-		fctx = SRF_FIRSTCALL_INIT();
-
-		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-		rel = relation_openrv(relrv, AccessShareLock);
-
-		if (!IS_INDEX(rel) || !IS_BTREE(rel))
-			elog(ERROR, "relation \"%s\" is not a btree index",
-				 RelationGetRelationName(rel));
-
-		/*
-		 * Reject attempts to read non-local temporary relations; we would be
-		 * likely to get wrong data since we have no visibility into the
-		 * owning session's local buffers.
-		 */
-		if (RELATION_IS_OTHER_TEMP(rel))
-			ereport(ERROR,
-					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				errmsg("cannot access temporary tables of other sessions")));
-
-		if (blkno == 0)
-			elog(ERROR, "block 0 is a meta page");
-
-		CHECK_RELATION_BLOCK_RANGE(rel, blkno);
-
-		buffer = ReadBuffer(rel, blkno);
-
-		/*
-		 * We copy the page into local storage to avoid holding pin on the
-		 * buffer longer than we must, and possibly failing to release it at
-		 * all if the calling query doesn't fetch all rows.
-		 */
-		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
-
-		uargs = palloc(sizeof(struct user_args));
-
-		uargs->page = palloc(BLCKSZ);
-		memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
-
-		ReleaseBuffer(buffer);
-		relation_close(rel, AccessShareLock);
-
-		uargs->offset = FirstOffsetNumber;
-
-		opaque = (BTPageOpaque) PageGetSpecialPointer(uargs->page);
-
-		if (P_ISDELETED(opaque))
-			elog(NOTICE, "page is deleted");
-
-		fctx->max_calls = PageGetMaxOffsetNumber(uargs->page);
-
-		/* Build a tuple descriptor for our result type */
-		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-			elog(ERROR, "return type must be a row type");
-
-		fctx->attinmeta = TupleDescGetAttInMetadata(tupleDesc);
-
-		fctx->user_fctx = uargs;
-
-		MemoryContextSwitchTo(mctx);
-	}
-
-	fctx = SRF_PERCALL_SETUP();
-	uargs = fctx->user_fctx;
-
-	if (fctx->call_cntr < fctx->max_calls)
-	{
-		ItemId		id;
-		IndexTuple	itup;
-		int			j;
-		int			off;
-		int			dlen;
-		char	   *dump;
-		char	   *ptr;
-
-		id = PageGetItemId(uargs->page, uargs->offset);
-
-		if (!ItemIdIsValid(id))
-			elog(ERROR, "invalid ItemId");
-
-		itup = (IndexTuple) PageGetItem(uargs->page, id);
-
-		j = 0;
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%d", uargs->offset);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "(%u,%u)",
-				 BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
-				 itup->t_tid.ip_posid);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%d", (int) IndexTupleSize(itup));
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%c", IndexTupleHasNulls(itup) ? 't' : 'f');
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
-
-		ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
-		dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
-		dump = palloc0(dlen * 3 + 1);
-		values[j] = dump;
-		for (off = 0; off < dlen; off++)
-		{
-			if (off > 0)
-				*dump++ = ' ';
-			sprintf(dump, "%02x", *(ptr + off) & 0xff);
-			dump += 2;
-		}
-
-		tuple = BuildTupleFromCStrings(fctx->attinmeta, values);
-		result = HeapTupleGetDatum(tuple);
-
-		uargs->offset = uargs->offset + 1;
-
-		SRF_RETURN_NEXT(fctx, result);
-	}
-	else
-	{
-		pfree(uargs->page);
-		pfree(uargs);
-		SRF_RETURN_DONE(fctx);
-	}
-}
-
-
-/* ------------------------------------------------
- * bt_metap()
- *
- * Get a btree's meta-page information
- *
- * Usage: SELECT * FROM bt_metap('t1_pkey')
- * ------------------------------------------------
- */
-Datum
-bt_metap(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	Datum		result;
-	Relation	rel;
-	RangeVar   *relrv;
-	BTMetaPageData *metad;
-	TupleDesc	tupleDesc;
-	int			j;
-	char	   *values[6];
-	Buffer		buffer;
-	Page		page;
-	HeapTuple	tuple;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use pageinspect functions"))));
-
-	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-	rel = relation_openrv(relrv, AccessShareLock);
-
-	if (!IS_INDEX(rel) || !IS_BTREE(rel))
-		elog(ERROR, "relation \"%s\" is not a btree index",
-			 RelationGetRelationName(rel));
-
-	/*
-	 * Reject attempts to read non-local temporary relations; we would be
-	 * likely to get wrong data since we have no visibility into the owning
-	 * session's local buffers.
-	 */
-	if (RELATION_IS_OTHER_TEMP(rel))
-		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("cannot access temporary tables of other sessions")));
-
-	buffer = ReadBuffer(rel, 0);
-	page = BufferGetPage(buffer);
-	metad = BTPageGetMeta(page);
-
-	/* Build a tuple descriptor for our result type */
-	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-		elog(ERROR, "return type must be a row type");
-
-	j = 0;
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", metad->btm_magic);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", metad->btm_version);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", metad->btm_root);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", metad->btm_level);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", metad->btm_fastroot);
-	values[j] = palloc(32);
-	snprintf(values[j++], 32, "%d", metad->btm_fastlevel);
-
-	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
-								   values);
-
-	result = HeapTupleGetDatum(tuple);
-
-	ReleaseBuffer(buffer);
-
-	relation_close(rel, AccessShareLock);
-
-	PG_RETURN_DATUM(result);
-}
diff --git a/contrib/pageinspect/fsmfuncs.c b/contrib/pageinspect/fsmfuncs.c
deleted file mode 100644
index 38c4e23..0000000
--- a/contrib/pageinspect/fsmfuncs.c
+++ /dev/null
@@ -1,59 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * fsmfuncs.c
- *	  Functions to investigate FSM pages
- *
- * These functions are restricted to superusers for the fear of introducing
- * security holes if the input checking isn't as water-tight as it should.
- * You'd need to be superuser to obtain a raw page image anyway, so
- * there's hardly any use case for using these without superuser-rights
- * anyway.
- *
- * Copyright (c) 2007-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *	  contrib/pageinspect/fsmfuncs.c
- *
- *-------------------------------------------------------------------------
- */
-
-#include "postgres.h"
-#include "lib/stringinfo.h"
-#include "storage/fsm_internals.h"
-#include "utils/builtins.h"
-#include "miscadmin.h"
-#include "funcapi.h"
-
-Datum		fsm_page_contents(PG_FUNCTION_ARGS);
-
-/*
- * Dumps the contents of a FSM page.
- */
-PG_FUNCTION_INFO_V1(fsm_page_contents);
-
-Datum
-fsm_page_contents(PG_FUNCTION_ARGS)
-{
-	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
-	StringInfoData sinfo;
-	FSMPage		fsmpage;
-	int			i;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use raw page functions"))));
-
-	fsmpage = (FSMPage) PageGetContents(VARDATA(raw_page));
-
-	initStringInfo(&sinfo);
-
-	for (i = 0; i < NodesPerPage; i++)
-	{
-		if (fsmpage->fp_nodes[i] != 0)
-			appendStringInfo(&sinfo, "%d: %d\n", i, fsmpage->fp_nodes[i]);
-	}
-	appendStringInfo(&sinfo, "fp_next_slot: %d\n", fsmpage->fp_next_slot);
-
-	PG_RETURN_TEXT_P(cstring_to_text(sinfo.data));
-}
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
deleted file mode 100644
index 20bca0d..0000000
--- a/contrib/pageinspect/heapfuncs.c
+++ /dev/null
@@ -1,230 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * heapfuncs.c
- *	  Functions to investigate heap pages
- *
- * We check the input to these functions for corrupt pointers etc. that
- * might cause crashes, but at the same time we try to print out as much
- * information as possible, even if it's nonsense. That's because if a
- * page is corrupt, we don't know why and how exactly it is corrupt, so we
- * let the user judge it.
- *
- * These functions are restricted to superusers for the fear of introducing
- * security holes if the input checking isn't as water-tight as it should be.
- * You'd need to be superuser to obtain a raw page image anyway, so
- * there's hardly any use case for using these without superuser-rights
- * anyway.
- *
- * Copyright (c) 2007-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *	  contrib/pageinspect/heapfuncs.c
- *
- *-------------------------------------------------------------------------
- */
-
-#include "postgres.h"
-
-#include "fmgr.h"
-#include "funcapi.h"
-#include "access/heapam.h"
-#include "access/transam.h"
-#include "catalog/namespace.h"
-#include "catalog/pg_type.h"
-#include "utils/builtins.h"
-#include "miscadmin.h"
-
-Datum		heap_page_items(PG_FUNCTION_ARGS);
-
-
-/*
- * bits_to_text
- *
- * Converts a bits8-array of 'len' bits to a human-readable
- * c-string representation.
- */
-static char *
-bits_to_text(bits8 *bits, int len)
-{
-	int			i;
-	char	   *str;
-
-	str = palloc(len + 1);
-
-	for (i = 0; i < len; i++)
-		str[i] = (bits[(i / 8)] & (1 << (i % 8))) ? '1' : '0';
-
-	str[i] = '\0';
-
-	return str;
-}
-
-
-/*
- * heap_page_items
- *
- * Allows inspection of line pointers and tuple headers of a heap page.
- */
-PG_FUNCTION_INFO_V1(heap_page_items);
-
-typedef struct heap_page_items_state
-{
-	TupleDesc	tupd;
-	Page		page;
-	uint16		offset;
-} heap_page_items_state;
-
-Datum
-heap_page_items(PG_FUNCTION_ARGS)
-{
-	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
-	heap_page_items_state *inter_call_data = NULL;
-	FuncCallContext *fctx;
-	int			raw_page_size;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use raw page functions"))));
-
-	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
-
-	if (SRF_IS_FIRSTCALL())
-	{
-		TupleDesc	tupdesc;
-		MemoryContext mctx;
-
-		if (raw_page_size < SizeOfPageHeaderData)
-			ereport(ERROR,
-					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-				  errmsg("input page too small (%d bytes)", raw_page_size)));
-
-		fctx = SRF_FIRSTCALL_INIT();
-		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
-
-		inter_call_data = palloc(sizeof(heap_page_items_state));
-
-		/* Build a tuple descriptor for our result type */
-		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-			elog(ERROR, "return type must be a row type");
-
-		inter_call_data->tupd = tupdesc;
-
-		inter_call_data->offset = FirstOffsetNumber;
-		inter_call_data->page = VARDATA(raw_page);
-
-		fctx->max_calls = PageGetMaxOffsetNumber(inter_call_data->page);
-		fctx->user_fctx = inter_call_data;
-
-		MemoryContextSwitchTo(mctx);
-	}
-
-	fctx = SRF_PERCALL_SETUP();
-	inter_call_data = fctx->user_fctx;
-
-	if (fctx->call_cntr < fctx->max_calls)
-	{
-		Page		page = inter_call_data->page;
-		HeapTuple	resultTuple;
-		Datum		result;
-		ItemId		id;
-		Datum		values[13];
-		bool		nulls[13];
-		uint16		lp_offset;
-		uint16		lp_flags;
-		uint16		lp_len;
-
-		memset(nulls, 0, sizeof(nulls));
-
-		/* Extract information from the line pointer */
-
-		id = PageGetItemId(page, inter_call_data->offset);
-
-		lp_offset = ItemIdGetOffset(id);
-		lp_flags = ItemIdGetFlags(id);
-		lp_len = ItemIdGetLength(id);
-
-		values[0] = UInt16GetDatum(inter_call_data->offset);
-		values[1] = UInt16GetDatum(lp_offset);
-		values[2] = UInt16GetDatum(lp_flags);
-		values[3] = UInt16GetDatum(lp_len);
-
-		/*
-		 * We do just enough validity checking to make sure we don't reference
-		 * data outside the page passed to us. The page could be corrupt in
-		 * many other ways, but at least we won't crash.
-		 */
-		if (ItemIdHasStorage(id) &&
-			lp_len >= sizeof(HeapTupleHeader) &&
-			lp_offset == MAXALIGN(lp_offset) &&
-			lp_offset + lp_len <= raw_page_size)
-		{
-			HeapTupleHeader tuphdr;
-			int			bits_len;
-
-			/* Extract information from the tuple header */
-
-			tuphdr = (HeapTupleHeader) PageGetItem(page, id);
-
-			values[4] = UInt32GetDatum(HeapTupleHeaderGetXmin(tuphdr));
-			values[5] = UInt32GetDatum(HeapTupleHeaderGetXmax(tuphdr));
-			values[6] = UInt32GetDatum(HeapTupleHeaderGetRawCommandId(tuphdr)); /* shared with xvac */
-			values[7] = PointerGetDatum(&tuphdr->t_ctid);
-			values[8] = UInt32GetDatum(tuphdr->t_infomask2);
-			values[9] = UInt32GetDatum(tuphdr->t_infomask);
-			values[10] = UInt8GetDatum(tuphdr->t_hoff);
-
-			/*
-			 * We already checked that the item as is completely within the
-			 * raw page passed to us, with the length given in the line
-			 * pointer.. Let's check that t_hoff doesn't point over lp_len,
-			 * before using it to access t_bits and oid.
-			 */
-			if (tuphdr->t_hoff >= sizeof(HeapTupleHeader) &&
-				tuphdr->t_hoff <= lp_len)
-			{
-				if (tuphdr->t_infomask & HEAP_HASNULL)
-				{
-					bits_len = tuphdr->t_hoff -
-						(((char *) tuphdr->t_bits) -((char *) tuphdr));
-
-					values[11] = CStringGetTextDatum(
-								 bits_to_text(tuphdr->t_bits, bits_len * 8));
-				}
-				else
-					nulls[11] = true;
-
-				if (tuphdr->t_infomask & HEAP_HASOID)
-					values[12] = HeapTupleHeaderGetOid(tuphdr);
-				else
-					nulls[12] = true;
-			}
-			else
-			{
-				nulls[11] = true;
-				nulls[12] = true;
-			}
-		}
-		else
-		{
-			/*
-			 * The line pointer is not used, or it's invalid. Set the rest of
-			 * the fields to NULL
-			 */
-			int			i;
-
-			for (i = 4; i <= 12; i++)
-				nulls[i] = true;
-		}
-
-		/* Build and return the result tuple. */
-		resultTuple = heap_form_tuple(inter_call_data->tupd, values, nulls);
-		result = HeapTupleGetDatum(resultTuple);
-
-		inter_call_data->offset++;
-
-		SRF_RETURN_NEXT(fctx, result);
-	}
-	else
-		SRF_RETURN_DONE(fctx);
-}
diff --git a/contrib/pageinspect/pageinspect--1.0.sql b/contrib/pageinspect/pageinspect--1.0.sql
deleted file mode 100644
index a711f58..0000000
--- a/contrib/pageinspect/pageinspect--1.0.sql
+++ /dev/null
@@ -1,104 +0,0 @@
-/* contrib/pageinspect/pageinspect--1.0.sql */
-
---
--- get_raw_page()
---
-CREATE FUNCTION get_raw_page(text, int4)
-RETURNS bytea
-AS 'MODULE_PATHNAME', 'get_raw_page'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION get_raw_page(text, text, int4)
-RETURNS bytea
-AS 'MODULE_PATHNAME', 'get_raw_page_fork'
-LANGUAGE C STRICT;
-
---
--- page_header()
---
-CREATE FUNCTION page_header(IN page bytea,
-    OUT lsn text,
-    OUT tli smallint,
-    OUT flags smallint,
-    OUT lower smallint,
-    OUT upper smallint,
-    OUT special smallint,
-    OUT pagesize smallint,
-    OUT version smallint,
-    OUT prune_xid xid)
-AS 'MODULE_PATHNAME', 'page_header'
-LANGUAGE C STRICT;
-
---
--- heap_page_items()
---
-CREATE FUNCTION heap_page_items(IN page bytea,
-    OUT lp smallint,
-    OUT lp_off smallint,
-    OUT lp_flags smallint,
-    OUT lp_len smallint,
-    OUT t_xmin xid,
-    OUT t_xmax xid,
-    OUT t_field3 int4,
-    OUT t_ctid tid,
-    OUT t_infomask2 integer,
-    OUT t_infomask integer,
-    OUT t_hoff smallint,
-    OUT t_bits text,
-    OUT t_oid oid)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'heap_page_items'
-LANGUAGE C STRICT;
-
---
--- bt_metap()
---
-CREATE FUNCTION bt_metap(IN relname text,
-    OUT magic int4,
-    OUT version int4,
-    OUT root int4,
-    OUT level int4,
-    OUT fastroot int4,
-    OUT fastlevel int4)
-AS 'MODULE_PATHNAME', 'bt_metap'
-LANGUAGE C STRICT;
-
---
--- bt_page_stats()
---
-CREATE FUNCTION bt_page_stats(IN relname text, IN blkno int4,
-    OUT blkno int4,
-    OUT type "char",
-    OUT live_items int4,
-    OUT dead_items int4,
-    OUT avg_item_size int4,
-    OUT page_size int4,
-    OUT free_size int4,
-    OUT btpo_prev int4,
-    OUT btpo_next int4,
-    OUT btpo int4,
-    OUT btpo_flags int4)
-AS 'MODULE_PATHNAME', 'bt_page_stats'
-LANGUAGE C STRICT;
-
---
--- bt_page_items()
---
-CREATE FUNCTION bt_page_items(IN relname text, IN blkno int4,
-    OUT itemoffset smallint,
-    OUT ctid tid,
-    OUT itemlen smallint,
-    OUT nulls bool,
-    OUT vars bool,
-    OUT data text)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'bt_page_items'
-LANGUAGE C STRICT;
-
---
--- fsm_page_contents()
---
-CREATE FUNCTION fsm_page_contents(IN page bytea)
-RETURNS text
-AS 'MODULE_PATHNAME', 'fsm_page_contents'
-LANGUAGE C STRICT;
diff --git a/contrib/pageinspect/pageinspect--unpackaged--1.0.sql b/contrib/pageinspect/pageinspect--unpackaged--1.0.sql
deleted file mode 100644
index 7d4feaf..0000000
--- a/contrib/pageinspect/pageinspect--unpackaged--1.0.sql
+++ /dev/null
@@ -1,28 +0,0 @@
-/* contrib/pageinspect/pageinspect--unpackaged--1.0.sql */
-
-DROP FUNCTION heap_page_items(bytea);
-CREATE FUNCTION heap_page_items(IN page bytea,
-	OUT lp smallint,
-	OUT lp_off smallint,
-	OUT lp_flags smallint,
-	OUT lp_len smallint,
-	OUT t_xmin xid,
-	OUT t_xmax xid,
-	OUT t_field3 int4,
-	OUT t_ctid tid,
-	OUT t_infomask2 integer,
-	OUT t_infomask integer,
-	OUT t_hoff smallint,
-	OUT t_bits text,
-	OUT t_oid oid)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'heap_page_items'
-LANGUAGE C STRICT;
-
-ALTER EXTENSION pageinspect ADD function get_raw_page(text,integer);
-ALTER EXTENSION pageinspect ADD function get_raw_page(text,text,integer);
-ALTER EXTENSION pageinspect ADD function page_header(bytea);
-ALTER EXTENSION pageinspect ADD function bt_metap(text);
-ALTER EXTENSION pageinspect ADD function bt_page_stats(text,integer);
-ALTER EXTENSION pageinspect ADD function bt_page_items(text,integer);
-ALTER EXTENSION pageinspect ADD function fsm_page_contents(bytea);
diff --git a/contrib/pageinspect/pageinspect.control b/contrib/pageinspect/pageinspect.control
deleted file mode 100644
index f9da0e8..0000000
--- a/contrib/pageinspect/pageinspect.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pageinspect extension
-comment = 'inspect the contents of database pages at a low level'
-default_version = '1.0'
-module_pathname = '$libdir/pageinspect'
-relocatable = true
diff --git a/contrib/pageinspect/rawpage.c b/contrib/pageinspect/rawpage.c
deleted file mode 100644
index 2607576..0000000
--- a/contrib/pageinspect/rawpage.c
+++ /dev/null
@@ -1,232 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * rawpage.c
- *	  Functions to extract a raw page as bytea and inspect it
- *
- * Access-method specific inspection functions are in separate files.
- *
- * Copyright (c) 2007-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *	  contrib/pageinspect/rawpage.c
- *
- *-------------------------------------------------------------------------
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/transam.h"
-#include "catalog/catalog.h"
-#include "catalog/namespace.h"
-#include "catalog/pg_type.h"
-#include "fmgr.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "utils/builtins.h"
-
-PG_MODULE_MAGIC;
-
-Datum		get_raw_page(PG_FUNCTION_ARGS);
-Datum		get_raw_page_fork(PG_FUNCTION_ARGS);
-Datum		page_header(PG_FUNCTION_ARGS);
-
-static bytea *get_raw_page_internal(text *relname, ForkNumber forknum,
-					  BlockNumber blkno);
-
-
-/*
- * get_raw_page
- *
- * Returns a copy of a page from shared buffers as a bytea
- */
-PG_FUNCTION_INFO_V1(get_raw_page);
-
-Datum
-get_raw_page(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	uint32		blkno = PG_GETARG_UINT32(1);
-	bytea	   *raw_page;
-
-	/*
-	 * We don't normally bother to check the number of arguments to a C
-	 * function, but here it's needed for safety because early 8.4 beta
-	 * releases mistakenly redefined get_raw_page() as taking three arguments.
-	 */
-	if (PG_NARGS() != 2)
-		ereport(ERROR,
-				(errmsg("wrong number of arguments to get_raw_page()"),
-				 errhint("Run the updated pageinspect.sql script.")));
-
-	raw_page = get_raw_page_internal(relname, MAIN_FORKNUM, blkno);
-
-	PG_RETURN_BYTEA_P(raw_page);
-}
-
-/*
- * get_raw_page_fork
- *
- * Same, for any fork
- */
-PG_FUNCTION_INFO_V1(get_raw_page_fork);
-
-Datum
-get_raw_page_fork(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	text	   *forkname = PG_GETARG_TEXT_P(1);
-	uint32		blkno = PG_GETARG_UINT32(2);
-	bytea	   *raw_page;
-	ForkNumber	forknum;
-
-	forknum = forkname_to_number(text_to_cstring(forkname));
-
-	raw_page = get_raw_page_internal(relname, forknum, blkno);
-
-	PG_RETURN_BYTEA_P(raw_page);
-}
-
-/*
- * workhorse
- */
-static bytea *
-get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
-{
-	bytea	   *raw_page;
-	RangeVar   *relrv;
-	Relation	rel;
-	char	   *raw_page_data;
-	Buffer		buf;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use raw functions"))));
-
-	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-	rel = relation_openrv(relrv, AccessShareLock);
-
-	/* Check that this relation has storage */
-	if (rel->rd_rel->relkind == RELKIND_VIEW)
-		ereport(ERROR,
-				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-				 errmsg("cannot get raw page from view \"%s\"",
-						RelationGetRelationName(rel))));
-	if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
-		ereport(ERROR,
-				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-				 errmsg("cannot get raw page from composite type \"%s\"",
-						RelationGetRelationName(rel))));
-	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
-		ereport(ERROR,
-				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-				 errmsg("cannot get raw page from foreign table \"%s\"",
-						RelationGetRelationName(rel))));
-
-	/*
-	 * Reject attempts to read non-local temporary relations; we would be
-	 * likely to get wrong data since we have no visibility into the owning
-	 * session's local buffers.
-	 */
-	if (RELATION_IS_OTHER_TEMP(rel))
-		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("cannot access temporary tables of other sessions")));
-
-	if (blkno >= RelationGetNumberOfBlocks(rel))
-		elog(ERROR, "block number %u is out of range for relation \"%s\"",
-			 blkno, RelationGetRelationName(rel));
-
-	/* Initialize buffer to copy to */
-	raw_page = (bytea *) palloc(BLCKSZ + VARHDRSZ);
-	SET_VARSIZE(raw_page, BLCKSZ + VARHDRSZ);
-	raw_page_data = VARDATA(raw_page);
-
-	/* Take a verbatim copy of the page */
-
-	buf = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
-	LockBuffer(buf, BUFFER_LOCK_SHARE);
-
-	memcpy(raw_page_data, BufferGetPage(buf), BLCKSZ);
-
-	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buf);
-
-	relation_close(rel, AccessShareLock);
-
-	return raw_page;
-}
-
-/*
- * page_header
- *
- * Allows inspection of page header fields of a raw page
- */
-
-PG_FUNCTION_INFO_V1(page_header);
-
-Datum
-page_header(PG_FUNCTION_ARGS)
-{
-	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
-	int			raw_page_size;
-
-	TupleDesc	tupdesc;
-
-	Datum		result;
-	HeapTuple	tuple;
-	Datum		values[9];
-	bool		nulls[9];
-
-	PageHeader	page;
-	XLogRecPtr	lsn;
-	char		lsnchar[64];
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use raw page functions"))));
-
-	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
-
-	/*
-	 * Check that enough data was supplied, so that we don't try to access
-	 * fields outside the supplied buffer.
-	 */
-	if (raw_page_size < sizeof(PageHeaderData))
-		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-				 errmsg("input page too small (%d bytes)", raw_page_size)));
-
-	page = (PageHeader) VARDATA(raw_page);
-
-	/* Build a tuple descriptor for our result type */
-	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-		elog(ERROR, "return type must be a row type");
-
-	/* Extract information from the page header */
-
-	lsn = PageGetLSN(page);
-	snprintf(lsnchar, sizeof(lsnchar), "%X/%X", lsn.xlogid, lsn.xrecoff);
-
-	values[0] = CStringGetTextDatum(lsnchar);
-	values[1] = UInt16GetDatum(PageGetTLI(page));
-	values[2] = UInt16GetDatum(page->pd_flags);
-	values[3] = UInt16GetDatum(page->pd_lower);
-	values[4] = UInt16GetDatum(page->pd_upper);
-	values[5] = UInt16GetDatum(page->pd_special);
-	values[6] = UInt16GetDatum(PageGetPageSize(page));
-	values[7] = UInt16GetDatum(PageGetPageLayoutVersion(page));
-	values[8] = TransactionIdGetDatum(page->pd_prune_xid);
-
-	/* Build and return the tuple. */
-
-	memset(nulls, 0, sizeof(nulls));
-
-	tuple = heap_form_tuple(tupdesc, values, nulls);
-	result = HeapTupleGetDatum(tuple);
-
-	PG_RETURN_DATUM(result);
-}
diff --git a/contrib/pg_buffercache/Makefile b/contrib/pg_buffercache/Makefile
deleted file mode 100644
index 323c0ac..0000000
--- a/contrib/pg_buffercache/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pg_buffercache/Makefile
-
-MODULE_big = pg_buffercache
-OBJS = pg_buffercache_pages.o
-
-EXTENSION = pg_buffercache
-DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pg_buffercache
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pg_buffercache/pg_buffercache--1.0.sql b/contrib/pg_buffercache/pg_buffercache--1.0.sql
deleted file mode 100644
index 9407d21..0000000
--- a/contrib/pg_buffercache/pg_buffercache--1.0.sql
+++ /dev/null
@@ -1,17 +0,0 @@
-/* contrib/pg_buffercache/pg_buffercache--1.0.sql */
-
--- Register the function.
-CREATE FUNCTION pg_buffercache_pages()
-RETURNS SETOF RECORD
-AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
-LANGUAGE C;
-
--- Create a view for convenient access.
-CREATE VIEW pg_buffercache AS
-	SELECT P.* FROM pg_buffercache_pages() AS P
-	(bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
-	 relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
-
--- Don't want these to be available to public.
-REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
-REVOKE ALL ON pg_buffercache FROM PUBLIC;
diff --git a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
deleted file mode 100644
index f00a954..0000000
--- a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
+++ /dev/null
@@ -1,4 +0,0 @@
-/* contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
-
-ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
-ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
diff --git a/contrib/pg_buffercache/pg_buffercache.control b/contrib/pg_buffercache/pg_buffercache.control
deleted file mode 100644
index 709513c..0000000
--- a/contrib/pg_buffercache/pg_buffercache.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pg_buffercache extension
-comment = 'examine the shared buffer cache'
-default_version = '1.0'
-module_pathname = '$libdir/pg_buffercache'
-relocatable = true
diff --git a/contrib/pg_buffercache/pg_buffercache_pages.c b/contrib/pg_buffercache/pg_buffercache_pages.c
deleted file mode 100644
index ed88288..0000000
--- a/contrib/pg_buffercache/pg_buffercache_pages.c
+++ /dev/null
@@ -1,219 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * pg_buffercache_pages.c
- *	  display some contents of the buffer cache
- *
- *	  contrib/pg_buffercache/pg_buffercache_pages.c
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "catalog/pg_type.h"
-#include "funcapi.h"
-#include "storage/buf_internals.h"
-#include "storage/bufmgr.h"
-#include "utils/relcache.h"
-
-
-#define NUM_BUFFERCACHE_PAGES_ELEM	8
-
-PG_MODULE_MAGIC;
-
-Datum		pg_buffercache_pages(PG_FUNCTION_ARGS);
-
-
-/*
- * Record structure holding the to be exposed cache data.
- */
-typedef struct
-{
-	uint32		bufferid;
-	Oid			relfilenode;
-	Oid			reltablespace;
-	Oid			reldatabase;
-	ForkNumber	forknum;
-	BlockNumber blocknum;
-	bool		isvalid;
-	bool		isdirty;
-	uint16		usagecount;
-} BufferCachePagesRec;
-
-
-/*
- * Function context for data persisting over repeated calls.
- */
-typedef struct
-{
-	TupleDesc	tupdesc;
-	BufferCachePagesRec *record;
-} BufferCachePagesContext;
-
-
-/*
- * Function returning data from the shared buffer cache - buffer number,
- * relation node/tablespace/database/blocknum and dirty indicator.
- */
-PG_FUNCTION_INFO_V1(pg_buffercache_pages);
-
-Datum
-pg_buffercache_pages(PG_FUNCTION_ARGS)
-{
-	FuncCallContext *funcctx;
-	Datum		result;
-	MemoryContext oldcontext;
-	BufferCachePagesContext *fctx;		/* User function context. */
-	TupleDesc	tupledesc;
-	HeapTuple	tuple;
-
-	if (SRF_IS_FIRSTCALL())
-	{
-		int			i;
-		volatile BufferDesc *bufHdr;
-
-		funcctx = SRF_FIRSTCALL_INIT();
-
-		/* Switch context when allocating stuff to be used in later calls */
-		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
-
-		/* Create a user function context for cross-call persistence */
-		fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
-
-		/* Construct a tuple descriptor for the result rows. */
-		tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
-						   INT4OID, -1, 0);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
-						   OIDOID, -1, 0);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
-						   OIDOID, -1, 0);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
-						   OIDOID, -1, 0);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
-						   INT2OID, -1, 0);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
-						   INT8OID, -1, 0);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
-						   BOOLOID, -1, 0);
-		TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
-						   INT2OID, -1, 0);
-
-		fctx->tupdesc = BlessTupleDesc(tupledesc);
-
-		/* Allocate NBuffers worth of BufferCachePagesRec records. */
-		fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
-
-		/* Set max calls and remember the user function context. */
-		funcctx->max_calls = NBuffers;
-		funcctx->user_fctx = fctx;
-
-		/* Return to original context when allocating transient memory */
-		MemoryContextSwitchTo(oldcontext);
-
-		/*
-		 * To get a consistent picture of the buffer state, we must lock all
-		 * partitions of the buffer map.  Needless to say, this is horrible
-		 * for concurrency.  Must grab locks in increasing order to avoid
-		 * possible deadlocks.
-		 */
-		for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
-			LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
-
-		/*
-		 * Scan though all the buffers, saving the relevant fields in the
-		 * fctx->record structure.
-		 */
-		for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
-		{
-			/* Lock each buffer header before inspecting. */
-			LockBufHdr(bufHdr);
-
-			fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
-			fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
-			fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
-			fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
-			fctx->record[i].forknum = bufHdr->tag.forkNum;
-			fctx->record[i].blocknum = bufHdr->tag.blockNum;
-			fctx->record[i].usagecount = bufHdr->usage_count;
-
-			if (bufHdr->flags & BM_DIRTY)
-				fctx->record[i].isdirty = true;
-			else
-				fctx->record[i].isdirty = false;
-
-			/* Note if the buffer is valid, and has storage created */
-			if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
-				fctx->record[i].isvalid = true;
-			else
-				fctx->record[i].isvalid = false;
-
-			UnlockBufHdr(bufHdr);
-		}
-
-		/*
-		 * And release locks.  We do this in reverse order for two reasons:
-		 * (1) Anyone else who needs more than one of the locks will be trying
-		 * to lock them in increasing order; we don't want to release the
-		 * other process until it can get all the locks it needs. (2) This
-		 * avoids O(N^2) behavior inside LWLockRelease.
-		 */
-		for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
-			LWLockRelease(FirstBufMappingLock + i);
-	}
-
-	funcctx = SRF_PERCALL_SETUP();
-
-	/* Get the saved state */
-	fctx = funcctx->user_fctx;
-
-	if (funcctx->call_cntr < funcctx->max_calls)
-	{
-		uint32		i = funcctx->call_cntr;
-		Datum		values[NUM_BUFFERCACHE_PAGES_ELEM];
-		bool		nulls[NUM_BUFFERCACHE_PAGES_ELEM];
-
-		values[0] = Int32GetDatum(fctx->record[i].bufferid);
-		nulls[0] = false;
-
-		/*
-		 * Set all fields except the bufferid to null if the buffer is unused
-		 * or not valid.
-		 */
-		if (fctx->record[i].blocknum == InvalidBlockNumber ||
-			fctx->record[i].isvalid == false)
-		{
-			nulls[1] = true;
-			nulls[2] = true;
-			nulls[3] = true;
-			nulls[4] = true;
-			nulls[5] = true;
-			nulls[6] = true;
-			nulls[7] = true;
-		}
-		else
-		{
-			values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
-			nulls[1] = false;
-			values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
-			nulls[2] = false;
-			values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
-			nulls[3] = false;
-			values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
-			nulls[4] = false;
-			values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
-			nulls[5] = false;
-			values[6] = BoolGetDatum(fctx->record[i].isdirty);
-			nulls[6] = false;
-			values[7] = Int16GetDatum(fctx->record[i].usagecount);
-			nulls[7] = false;
-		}
-
-		/* Build and return the tuple. */
-		tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
-		result = HeapTupleGetDatum(tuple);
-
-		SRF_RETURN_NEXT(funcctx, result);
-	}
-	else
-		SRF_RETURN_DONE(funcctx);
-}
diff --git a/contrib/pg_freespacemap/Makefile b/contrib/pg_freespacemap/Makefile
deleted file mode 100644
index b2e3ba3..0000000
--- a/contrib/pg_freespacemap/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pg_freespacemap/Makefile
-
-MODULE_big = pg_freespacemap
-OBJS = pg_freespacemap.o
-
-EXTENSION = pg_freespacemap
-DATA = pg_freespacemap--1.0.sql pg_freespacemap--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pg_freespacemap
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pg_freespacemap/pg_freespacemap--1.0.sql b/contrib/pg_freespacemap/pg_freespacemap--1.0.sql
deleted file mode 100644
index d63420e..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap--1.0.sql
+++ /dev/null
@@ -1,22 +0,0 @@
-/* contrib/pg_freespacemap/pg_freespacemap--1.0.sql */
-
--- Register the C function.
-CREATE FUNCTION pg_freespace(regclass, bigint)
-RETURNS int2
-AS 'MODULE_PATHNAME', 'pg_freespace'
-LANGUAGE C STRICT;
-
--- pg_freespace shows the recorded space avail at each block in a relation
-CREATE FUNCTION
-  pg_freespace(rel regclass, blkno OUT bigint, avail OUT int2)
-RETURNS SETOF RECORD
-AS $$
-  SELECT blkno, pg_freespace($1, blkno) AS avail
-  FROM generate_series(0, pg_relation_size($1) / current_setting('block_size')::bigint - 1) AS blkno;
-$$
-LANGUAGE SQL;
-
-
--- Don't want these to be available to public.
-REVOKE ALL ON FUNCTION pg_freespace(regclass, bigint) FROM PUBLIC;
-REVOKE ALL ON FUNCTION pg_freespace(regclass) FROM PUBLIC;
diff --git a/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql b/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
deleted file mode 100644
index 4c7487f..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
+++ /dev/null
@@ -1,4 +0,0 @@
-/* contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql */
-
-ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass,bigint);
-ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass);
diff --git a/contrib/pg_freespacemap/pg_freespacemap.c b/contrib/pg_freespacemap/pg_freespacemap.c
deleted file mode 100644
index bf6b0df..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap.c
+++ /dev/null
@@ -1,46 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * pg_freespacemap.c
- *	  display contents of a free space map
- *
- *	  contrib/pg_freespacemap/pg_freespacemap.c
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "funcapi.h"
-#include "storage/block.h"
-#include "storage/freespace.h"
-
-
-PG_MODULE_MAGIC;
-
-Datum		pg_freespace(PG_FUNCTION_ARGS);
-
-/*
- * Returns the amount of free space on a given page, according to the
- * free space map.
- */
-PG_FUNCTION_INFO_V1(pg_freespace);
-
-Datum
-pg_freespace(PG_FUNCTION_ARGS)
-{
-	Oid			relid = PG_GETARG_OID(0);
-	int64		blkno = PG_GETARG_INT64(1);
-	int16		freespace;
-	Relation	rel;
-
-	rel = relation_open(relid, AccessShareLock);
-
-	if (blkno < 0 || blkno > MaxBlockNumber)
-		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-				 errmsg("invalid block number")));
-
-	freespace = GetRecordedFreeSpace(rel, blkno);
-
-	relation_close(rel, AccessShareLock);
-	PG_RETURN_INT16(freespace);
-}
diff --git a/contrib/pg_freespacemap/pg_freespacemap.control b/contrib/pg_freespacemap/pg_freespacemap.control
deleted file mode 100644
index 34b695f..0000000
--- a/contrib/pg_freespacemap/pg_freespacemap.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pg_freespacemap extension
-comment = 'examine the free space map (FSM)'
-default_version = '1.0'
-module_pathname = '$libdir/pg_freespacemap'
-relocatable = true
diff --git a/contrib/pg_stat_statements/Makefile b/contrib/pg_stat_statements/Makefile
deleted file mode 100644
index e086fd8..0000000
--- a/contrib/pg_stat_statements/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pg_stat_statements/Makefile
-
-MODULE_big = pg_stat_statements
-OBJS = pg_stat_statements.o
-
-EXTENSION = pg_stat_statements
-DATA = pg_stat_statements--1.0.sql pg_stat_statements--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pg_stat_statements
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pg_stat_statements/pg_stat_statements--1.0.sql b/contrib/pg_stat_statements/pg_stat_statements--1.0.sql
deleted file mode 100644
index e17b82c..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements--1.0.sql
+++ /dev/null
@@ -1,36 +0,0 @@
-/* contrib/pg_stat_statements/pg_stat_statements--1.0.sql */
-
--- Register functions.
-CREATE FUNCTION pg_stat_statements_reset()
-RETURNS void
-AS 'MODULE_PATHNAME'
-LANGUAGE C;
-
-CREATE FUNCTION pg_stat_statements(
-    OUT userid oid,
-    OUT dbid oid,
-    OUT query text,
-    OUT calls int8,
-    OUT total_time float8,
-    OUT rows int8,
-    OUT shared_blks_hit int8,
-    OUT shared_blks_read int8,
-    OUT shared_blks_written int8,
-    OUT local_blks_hit int8,
-    OUT local_blks_read int8,
-    OUT local_blks_written int8,
-    OUT temp_blks_read int8,
-    OUT temp_blks_written int8
-)
-RETURNS SETOF record
-AS 'MODULE_PATHNAME'
-LANGUAGE C;
-
--- Register a view on the function for ease of use.
-CREATE VIEW pg_stat_statements AS
-  SELECT * FROM pg_stat_statements();
-
-GRANT SELECT ON pg_stat_statements TO PUBLIC;
-
--- Don't want this to be available to non-superusers.
-REVOKE ALL ON FUNCTION pg_stat_statements_reset() FROM PUBLIC;
diff --git a/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql b/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
deleted file mode 100644
index 9dda85c..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
+++ /dev/null
@@ -1,5 +0,0 @@
-/* contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql */
-
-ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements_reset();
-ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements();
-ALTER EXTENSION pg_stat_statements ADD view pg_stat_statements;
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
deleted file mode 100644
index 0236b87..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements.c
+++ /dev/null
@@ -1,1046 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * pg_stat_statements.c
- *		Track statement execution times across a whole database cluster.
- *
- * Note about locking issues: to create or delete an entry in the shared
- * hashtable, one must hold pgss->lock exclusively.  Modifying any field
- * in an entry except the counters requires the same.  To look up an entry,
- * one must hold the lock shared.  To read or update the counters within
- * an entry, one must hold the lock shared or exclusive (so the entry doesn't
- * disappear!) and also take the entry's mutex spinlock.
- *
- *
- * Copyright (c) 2008-2011, PostgreSQL Global Development Group
- *
- * IDENTIFICATION
- *	  contrib/pg_stat_statements/pg_stat_statements.c
- *
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include <unistd.h>
-
-#include "access/hash.h"
-#include "catalog/pg_type.h"
-#include "executor/executor.h"
-#include "executor/instrument.h"
-#include "funcapi.h"
-#include "mb/pg_wchar.h"
-#include "miscadmin.h"
-#include "pgstat.h"
-#include "storage/fd.h"
-#include "storage/ipc.h"
-#include "storage/spin.h"
-#include "tcop/utility.h"
-#include "utils/builtins.h"
-#include "utils/hsearch.h"
-#include "utils/guc.h"
-
-
-PG_MODULE_MAGIC;
-
-/* Location of stats file */
-#define PGSS_DUMP_FILE	"global/pg_stat_statements.stat"
-
-/* This constant defines the magic number in the stats file header */
-static const uint32 PGSS_FILE_HEADER = 0x20100108;
-
-/* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
-#define USAGE_EXEC(duration)	(1.0)
-#define USAGE_INIT				(1.0)	/* including initial planning */
-#define USAGE_DECREASE_FACTOR	(0.99)	/* decreased every entry_dealloc */
-#define USAGE_DEALLOC_PERCENT	5		/* free this % of entries at once */
-
-/*
- * Hashtable key that defines the identity of a hashtable entry.  The
- * hash comparators do not assume that the query string is null-terminated;
- * this lets us search for an mbcliplen'd string without copying it first.
- *
- * Presently, the query encoding is fully determined by the source database
- * and so we don't really need it to be in the key.  But that might not always
- * be true. Anyway it's notationally convenient to pass it as part of the key.
- */
-typedef struct pgssHashKey
-{
-	Oid			userid;			/* user OID */
-	Oid			dbid;			/* database OID */
-	int			encoding;		/* query encoding */
-	int			query_len;		/* # of valid bytes in query string */
-	const char *query_ptr;		/* query string proper */
-} pgssHashKey;
-
-/*
- * The actual stats counters kept within pgssEntry.
- */
-typedef struct Counters
-{
-	int64		calls;			/* # of times executed */
-	double		total_time;		/* total execution time in seconds */
-	int64		rows;			/* total # of retrieved or affected rows */
-	int64		shared_blks_hit;	/* # of shared buffer hits */
-	int64		shared_blks_read;		/* # of shared disk blocks read */
-	int64		shared_blks_written;	/* # of shared disk blocks written */
-	int64		local_blks_hit; /* # of local buffer hits */
-	int64		local_blks_read;	/* # of local disk blocks read */
-	int64		local_blks_written;		/* # of local disk blocks written */
-	int64		temp_blks_read; /* # of temp blocks read */
-	int64		temp_blks_written;		/* # of temp blocks written */
-	double		usage;			/* usage factor */
-} Counters;
-
-/*
- * Statistics per statement
- *
- * NB: see the file read/write code before changing field order here.
- */
-typedef struct pgssEntry
-{
-	pgssHashKey key;			/* hash key of entry - MUST BE FIRST */
-	Counters	counters;		/* the statistics for this query */
-	slock_t		mutex;			/* protects the counters only */
-	char		query[1];		/* VARIABLE LENGTH ARRAY - MUST BE LAST */
-	/* Note: the allocated length of query[] is actually pgss->query_size */
-} pgssEntry;
-
-/*
- * Global shared state
- */
-typedef struct pgssSharedState
-{
-	LWLockId	lock;			/* protects hashtable search/modification */
-	int			query_size;		/* max query length in bytes */
-} pgssSharedState;
-
-/*---- Local variables ----*/
-
-/* Current nesting depth of ExecutorRun calls */
-static int	nested_level = 0;
-
-/* Saved hook values in case of unload */
-static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
-static ExecutorStart_hook_type prev_ExecutorStart = NULL;
-static ExecutorRun_hook_type prev_ExecutorRun = NULL;
-static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
-static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
-static ProcessUtility_hook_type prev_ProcessUtility = NULL;
-
-/* Links to shared memory state */
-static pgssSharedState *pgss = NULL;
-static HTAB *pgss_hash = NULL;
-
-/*---- GUC variables ----*/
-
-typedef enum
-{
-	PGSS_TRACK_NONE,			/* track no statements */
-	PGSS_TRACK_TOP,				/* only top level statements */
-	PGSS_TRACK_ALL				/* all statements, including nested ones */
-}	PGSSTrackLevel;
-
-static const struct config_enum_entry track_options[] =
-{
-	{"none", PGSS_TRACK_NONE, false},
-	{"top", PGSS_TRACK_TOP, false},
-	{"all", PGSS_TRACK_ALL, false},
-	{NULL, 0, false}
-};
-
-static int	pgss_max;			/* max # statements to track */
-static int	pgss_track;			/* tracking level */
-static bool pgss_track_utility; /* whether to track utility commands */
-static bool pgss_save;			/* whether to save stats across shutdown */
-
-
-#define pgss_enabled() \
-	(pgss_track == PGSS_TRACK_ALL || \
-	(pgss_track == PGSS_TRACK_TOP && nested_level == 0))
-
-/*---- Function declarations ----*/
-
-void		_PG_init(void);
-void		_PG_fini(void);
-
-Datum		pg_stat_statements_reset(PG_FUNCTION_ARGS);
-Datum		pg_stat_statements(PG_FUNCTION_ARGS);
-
-PG_FUNCTION_INFO_V1(pg_stat_statements_reset);
-PG_FUNCTION_INFO_V1(pg_stat_statements);
-
-static void pgss_shmem_startup(void);
-static void pgss_shmem_shutdown(int code, Datum arg);
-static void pgss_ExecutorStart(QueryDesc *queryDesc, int eflags);
-static void pgss_ExecutorRun(QueryDesc *queryDesc,
-				 ScanDirection direction,
-				 long count);
-static void pgss_ExecutorFinish(QueryDesc *queryDesc);
-static void pgss_ExecutorEnd(QueryDesc *queryDesc);
-static void pgss_ProcessUtility(Node *parsetree,
-			  const char *queryString, ParamListInfo params, bool isTopLevel,
-					DestReceiver *dest, char *completionTag);
-static uint32 pgss_hash_fn(const void *key, Size keysize);
-static int	pgss_match_fn(const void *key1, const void *key2, Size keysize);
-static void pgss_store(const char *query, double total_time, uint64 rows,
-		   const BufferUsage *bufusage);
-static Size pgss_memsize(void);
-static pgssEntry *entry_alloc(pgssHashKey *key);
-static void entry_dealloc(void);
-static void entry_reset(void);
-
-
-/*
- * Module load callback
- */
-void
-_PG_init(void)
-{
-	/*
-	 * In order to create our shared memory area, we have to be loaded via
-	 * shared_preload_libraries.  If not, fall out without hooking into any of
-	 * the main system.  (We don't throw error here because it seems useful to
-	 * allow the pg_stat_statements functions to be created even when the
-	 * module isn't active.  The functions must protect themselves against
-	 * being called then, however.)
-	 */
-	if (!process_shared_preload_libraries_in_progress)
-		return;
-
-	/*
-	 * Define (or redefine) custom GUC variables.
-	 */
-	DefineCustomIntVariable("pg_stat_statements.max",
-	  "Sets the maximum number of statements tracked by pg_stat_statements.",
-							NULL,
-							&pgss_max,
-							1000,
-							100,
-							INT_MAX,
-							PGC_POSTMASTER,
-							0,
-							NULL,
-							NULL,
-							NULL);
-
-	DefineCustomEnumVariable("pg_stat_statements.track",
-			   "Selects which statements are tracked by pg_stat_statements.",
-							 NULL,
-							 &pgss_track,
-							 PGSS_TRACK_TOP,
-							 track_options,
-							 PGC_SUSET,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	DefineCustomBoolVariable("pg_stat_statements.track_utility",
-	   "Selects whether utility commands are tracked by pg_stat_statements.",
-							 NULL,
-							 &pgss_track_utility,
-							 true,
-							 PGC_SUSET,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	DefineCustomBoolVariable("pg_stat_statements.save",
-			   "Save pg_stat_statements statistics across server shutdowns.",
-							 NULL,
-							 &pgss_save,
-							 true,
-							 PGC_SIGHUP,
-							 0,
-							 NULL,
-							 NULL,
-							 NULL);
-
-	EmitWarningsOnPlaceholders("pg_stat_statements");
-
-	/*
-	 * Request additional shared resources.  (These are no-ops if we're not in
-	 * the postmaster process.)  We'll allocate or attach to the shared
-	 * resources in pgss_shmem_startup().
-	 */
-	RequestAddinShmemSpace(pgss_memsize());
-	RequestAddinLWLocks(1);
-
-	/*
-	 * Install hooks.
-	 */
-	prev_shmem_startup_hook = shmem_startup_hook;
-	shmem_startup_hook = pgss_shmem_startup;
-	prev_ExecutorStart = ExecutorStart_hook;
-	ExecutorStart_hook = pgss_ExecutorStart;
-	prev_ExecutorRun = ExecutorRun_hook;
-	ExecutorRun_hook = pgss_ExecutorRun;
-	prev_ExecutorFinish = ExecutorFinish_hook;
-	ExecutorFinish_hook = pgss_ExecutorFinish;
-	prev_ExecutorEnd = ExecutorEnd_hook;
-	ExecutorEnd_hook = pgss_ExecutorEnd;
-	prev_ProcessUtility = ProcessUtility_hook;
-	ProcessUtility_hook = pgss_ProcessUtility;
-}
-
-/*
- * Module unload callback
- */
-void
-_PG_fini(void)
-{
-	/* Uninstall hooks. */
-	shmem_startup_hook = prev_shmem_startup_hook;
-	ExecutorStart_hook = prev_ExecutorStart;
-	ExecutorRun_hook = prev_ExecutorRun;
-	ExecutorFinish_hook = prev_ExecutorFinish;
-	ExecutorEnd_hook = prev_ExecutorEnd;
-	ProcessUtility_hook = prev_ProcessUtility;
-}
-
-/*
- * shmem_startup hook: allocate or attach to shared memory,
- * then load any pre-existing statistics from file.
- */
-static void
-pgss_shmem_startup(void)
-{
-	bool		found;
-	HASHCTL		info;
-	FILE	   *file;
-	uint32		header;
-	int32		num;
-	int32		i;
-	int			query_size;
-	int			buffer_size;
-	char	   *buffer = NULL;
-
-	if (prev_shmem_startup_hook)
-		prev_shmem_startup_hook();
-
-	/* reset in case this is a restart within the postmaster */
-	pgss = NULL;
-	pgss_hash = NULL;
-
-	/*
-	 * Create or attach to the shared memory state, including hash table
-	 */
-	LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
-
-	pgss = ShmemInitStruct("pg_stat_statements",
-						   sizeof(pgssSharedState),
-						   &found);
-
-	if (!found)
-	{
-		/* First time through ... */
-		pgss->lock = LWLockAssign();
-		pgss->query_size = pgstat_track_activity_query_size;
-	}
-
-	/* Be sure everyone agrees on the hash table entry size */
-	query_size = pgss->query_size;
-
-	memset(&info, 0, sizeof(info));
-	info.keysize = sizeof(pgssHashKey);
-	info.entrysize = offsetof(pgssEntry, query) +query_size;
-	info.hash = pgss_hash_fn;
-	info.match = pgss_match_fn;
-	pgss_hash = ShmemInitHash("pg_stat_statements hash",
-							  pgss_max, pgss_max,
-							  &info,
-							  HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
-
-	LWLockRelease(AddinShmemInitLock);
-
-	/*
-	 * If we're in the postmaster (or a standalone backend...), set up a shmem
-	 * exit hook to dump the statistics to disk.
-	 */
-	if (!IsUnderPostmaster)
-		on_shmem_exit(pgss_shmem_shutdown, (Datum) 0);
-
-	/*
-	 * Attempt to load old statistics from the dump file, if this is the first
-	 * time through and we weren't told not to.
-	 */
-	if (found || !pgss_save)
-		return;
-
-	/*
-	 * Note: we don't bother with locks here, because there should be no other
-	 * processes running when this code is reached.
-	 */
-	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_R);
-	if (file == NULL)
-	{
-		if (errno == ENOENT)
-			return;				/* ignore not-found error */
-		goto error;
-	}
-
-	buffer_size = query_size;
-	buffer = (char *) palloc(buffer_size);
-
-	if (fread(&header, sizeof(uint32), 1, file) != 1 ||
-		header != PGSS_FILE_HEADER ||
-		fread(&num, sizeof(int32), 1, file) != 1)
-		goto error;
-
-	for (i = 0; i < num; i++)
-	{
-		pgssEntry	temp;
-		pgssEntry  *entry;
-
-		if (fread(&temp, offsetof(pgssEntry, mutex), 1, file) != 1)
-			goto error;
-
-		/* Encoding is the only field we can easily sanity-check */
-		if (!PG_VALID_BE_ENCODING(temp.key.encoding))
-			goto error;
-
-		/* Previous incarnation might have had a larger query_size */
-		if (temp.key.query_len >= buffer_size)
-		{
-			buffer = (char *) repalloc(buffer, temp.key.query_len + 1);
-			buffer_size = temp.key.query_len + 1;
-		}
-
-		if (fread(buffer, 1, temp.key.query_len, file) != temp.key.query_len)
-			goto error;
-		buffer[temp.key.query_len] = '\0';
-
-		/* Clip to available length if needed */
-		if (temp.key.query_len >= query_size)
-			temp.key.query_len = pg_encoding_mbcliplen(temp.key.encoding,
-													   buffer,
-													   temp.key.query_len,
-													   query_size - 1);
-		temp.key.query_ptr = buffer;
-
-		/* make the hashtable entry (discards old entries if too many) */
-		entry = entry_alloc(&temp.key);
-
-		/* copy in the actual stats */
-		entry->counters = temp.counters;
-	}
-
-	pfree(buffer);
-	FreeFile(file);
-	return;
-
-error:
-	ereport(LOG,
-			(errcode_for_file_access(),
-			 errmsg("could not read pg_stat_statement file \"%s\": %m",
-					PGSS_DUMP_FILE)));
-	if (buffer)
-		pfree(buffer);
-	if (file)
-		FreeFile(file);
-	/* If possible, throw away the bogus file; ignore any error */
-	unlink(PGSS_DUMP_FILE);
-}
-
-/*
- * shmem_shutdown hook: Dump statistics into file.
- *
- * Note: we don't bother with acquiring lock, because there should be no
- * other processes running when this is called.
- */
-static void
-pgss_shmem_shutdown(int code, Datum arg)
-{
-	FILE	   *file;
-	HASH_SEQ_STATUS hash_seq;
-	int32		num_entries;
-	pgssEntry  *entry;
-
-	/* Don't try to dump during a crash. */
-	if (code)
-		return;
-
-	/* Safety check ... shouldn't get here unless shmem is set up. */
-	if (!pgss || !pgss_hash)
-		return;
-
-	/* Don't dump if told not to. */
-	if (!pgss_save)
-		return;
-
-	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_W);
-	if (file == NULL)
-		goto error;
-
-	if (fwrite(&PGSS_FILE_HEADER, sizeof(uint32), 1, file) != 1)
-		goto error;
-	num_entries = hash_get_num_entries(pgss_hash);
-	if (fwrite(&num_entries, sizeof(int32), 1, file) != 1)
-		goto error;
-
-	hash_seq_init(&hash_seq, pgss_hash);
-	while ((entry = hash_seq_search(&hash_seq)) != NULL)
-	{
-		int			len = entry->key.query_len;
-
-		if (fwrite(entry, offsetof(pgssEntry, mutex), 1, file) != 1 ||
-			fwrite(entry->query, 1, len, file) != len)
-			goto error;
-	}
-
-	if (FreeFile(file))
-	{
-		file = NULL;
-		goto error;
-	}
-
-	return;
-
-error:
-	ereport(LOG,
-			(errcode_for_file_access(),
-			 errmsg("could not write pg_stat_statement file \"%s\": %m",
-					PGSS_DUMP_FILE)));
-	if (file)
-		FreeFile(file);
-	unlink(PGSS_DUMP_FILE);
-}
-
-/*
- * ExecutorStart hook: start up tracking if needed
- */
-static void
-pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
-{
-	if (prev_ExecutorStart)
-		prev_ExecutorStart(queryDesc, eflags);
-	else
-		standard_ExecutorStart(queryDesc, eflags);
-
-	if (pgss_enabled())
-	{
-		/*
-		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
-		 * space is allocated in the per-query context so it will go away at
-		 * ExecutorEnd.
-		 */
-		if (queryDesc->totaltime == NULL)
-		{
-			MemoryContext oldcxt;
-
-			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
-			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
-			MemoryContextSwitchTo(oldcxt);
-		}
-	}
-}
-
-/*
- * ExecutorRun hook: all we need do is track nesting depth
- */
-static void
-pgss_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
-{
-	nested_level++;
-	PG_TRY();
-	{
-		if (prev_ExecutorRun)
-			prev_ExecutorRun(queryDesc, direction, count);
-		else
-			standard_ExecutorRun(queryDesc, direction, count);
-		nested_level--;
-	}
-	PG_CATCH();
-	{
-		nested_level--;
-		PG_RE_THROW();
-	}
-	PG_END_TRY();
-}
-
-/*
- * ExecutorFinish hook: all we need do is track nesting depth
- */
-static void
-pgss_ExecutorFinish(QueryDesc *queryDesc)
-{
-	nested_level++;
-	PG_TRY();
-	{
-		if (prev_ExecutorFinish)
-			prev_ExecutorFinish(queryDesc);
-		else
-			standard_ExecutorFinish(queryDesc);
-		nested_level--;
-	}
-	PG_CATCH();
-	{
-		nested_level--;
-		PG_RE_THROW();
-	}
-	PG_END_TRY();
-}
-
-/*
- * ExecutorEnd hook: store results if needed
- */
-static void
-pgss_ExecutorEnd(QueryDesc *queryDesc)
-{
-	if (queryDesc->totaltime && pgss_enabled())
-	{
-		/*
-		 * Make sure stats accumulation is done.  (Note: it's okay if several
-		 * levels of hook all do this.)
-		 */
-		InstrEndLoop(queryDesc->totaltime);
-
-		pgss_store(queryDesc->sourceText,
-				   queryDesc->totaltime->total,
-				   queryDesc->estate->es_processed,
-				   &queryDesc->totaltime->bufusage);
-	}
-
-	if (prev_ExecutorEnd)
-		prev_ExecutorEnd(queryDesc);
-	else
-		standard_ExecutorEnd(queryDesc);
-}
-
-/*
- * ProcessUtility hook
- */
-static void
-pgss_ProcessUtility(Node *parsetree, const char *queryString,
-					ParamListInfo params, bool isTopLevel,
-					DestReceiver *dest, char *completionTag)
-{
-	if (pgss_track_utility && pgss_enabled())
-	{
-		instr_time	start;
-		instr_time	duration;
-		uint64		rows = 0;
-		BufferUsage bufusage;
-
-		bufusage = pgBufferUsage;
-		INSTR_TIME_SET_CURRENT(start);
-
-		nested_level++;
-		PG_TRY();
-		{
-			if (prev_ProcessUtility)
-				prev_ProcessUtility(parsetree, queryString, params,
-									isTopLevel, dest, completionTag);
-			else
-				standard_ProcessUtility(parsetree, queryString, params,
-										isTopLevel, dest, completionTag);
-			nested_level--;
-		}
-		PG_CATCH();
-		{
-			nested_level--;
-			PG_RE_THROW();
-		}
-		PG_END_TRY();
-
-		INSTR_TIME_SET_CURRENT(duration);
-		INSTR_TIME_SUBTRACT(duration, start);
-
-		/* parse command tag to retrieve the number of affected rows. */
-		if (completionTag &&
-			sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1)
-			rows = 0;
-
-		/* calc differences of buffer counters. */
-		bufusage.shared_blks_hit =
-			pgBufferUsage.shared_blks_hit - bufusage.shared_blks_hit;
-		bufusage.shared_blks_read =
-			pgBufferUsage.shared_blks_read - bufusage.shared_blks_read;
-		bufusage.shared_blks_written =
-			pgBufferUsage.shared_blks_written - bufusage.shared_blks_written;
-		bufusage.local_blks_hit =
-			pgBufferUsage.local_blks_hit - bufusage.local_blks_hit;
-		bufusage.local_blks_read =
-			pgBufferUsage.local_blks_read - bufusage.local_blks_read;
-		bufusage.local_blks_written =
-			pgBufferUsage.local_blks_written - bufusage.local_blks_written;
-		bufusage.temp_blks_read =
-			pgBufferUsage.temp_blks_read - bufusage.temp_blks_read;
-		bufusage.temp_blks_written =
-			pgBufferUsage.temp_blks_written - bufusage.temp_blks_written;
-
-		pgss_store(queryString, INSTR_TIME_GET_DOUBLE(duration), rows,
-				   &bufusage);
-	}
-	else
-	{
-		if (prev_ProcessUtility)
-			prev_ProcessUtility(parsetree, queryString, params,
-								isTopLevel, dest, completionTag);
-		else
-			standard_ProcessUtility(parsetree, queryString, params,
-									isTopLevel, dest, completionTag);
-	}
-}
-
-/*
- * Calculate hash value for a key
- */
-static uint32
-pgss_hash_fn(const void *key, Size keysize)
-{
-	const pgssHashKey *k = (const pgssHashKey *) key;
-
-	/* we don't bother to include encoding in the hash */
-	return hash_uint32((uint32) k->userid) ^
-		hash_uint32((uint32) k->dbid) ^
-		DatumGetUInt32(hash_any((const unsigned char *) k->query_ptr,
-								k->query_len));
-}
-
-/*
- * Compare two keys - zero means match
- */
-static int
-pgss_match_fn(const void *key1, const void *key2, Size keysize)
-{
-	const pgssHashKey *k1 = (const pgssHashKey *) key1;
-	const pgssHashKey *k2 = (const pgssHashKey *) key2;
-
-	if (k1->userid == k2->userid &&
-		k1->dbid == k2->dbid &&
-		k1->encoding == k2->encoding &&
-		k1->query_len == k2->query_len &&
-		memcmp(k1->query_ptr, k2->query_ptr, k1->query_len) == 0)
-		return 0;
-	else
-		return 1;
-}
-
-/*
- * Store some statistics for a statement.
- */
-static void
-pgss_store(const char *query, double total_time, uint64 rows,
-		   const BufferUsage *bufusage)
-{
-	pgssHashKey key;
-	double		usage;
-	pgssEntry  *entry;
-
-	Assert(query != NULL);
-
-	/* Safety check... */
-	if (!pgss || !pgss_hash)
-		return;
-
-	/* Set up key for hashtable search */
-	key.userid = GetUserId();
-	key.dbid = MyDatabaseId;
-	key.encoding = GetDatabaseEncoding();
-	key.query_len = strlen(query);
-	if (key.query_len >= pgss->query_size)
-		key.query_len = pg_encoding_mbcliplen(key.encoding,
-											  query,
-											  key.query_len,
-											  pgss->query_size - 1);
-	key.query_ptr = query;
-
-	usage = USAGE_EXEC(duration);
-
-	/* Lookup the hash table entry with shared lock. */
-	LWLockAcquire(pgss->lock, LW_SHARED);
-
-	entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);
-	if (!entry)
-	{
-		/* Must acquire exclusive lock to add a new entry. */
-		LWLockRelease(pgss->lock);
-		LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
-		entry = entry_alloc(&key);
-	}
-
-	/* Grab the spinlock while updating the counters. */
-	{
-		volatile pgssEntry *e = (volatile pgssEntry *) entry;
-
-		SpinLockAcquire(&e->mutex);
-		e->counters.calls += 1;
-		e->counters.total_time += total_time;
-		e->counters.rows += rows;
-		e->counters.shared_blks_hit += bufusage->shared_blks_hit;
-		e->counters.shared_blks_read += bufusage->shared_blks_read;
-		e->counters.shared_blks_written += bufusage->shared_blks_written;
-		e->counters.local_blks_hit += bufusage->local_blks_hit;
-		e->counters.local_blks_read += bufusage->local_blks_read;
-		e->counters.local_blks_written += bufusage->local_blks_written;
-		e->counters.temp_blks_read += bufusage->temp_blks_read;
-		e->counters.temp_blks_written += bufusage->temp_blks_written;
-		e->counters.usage += usage;
-		SpinLockRelease(&e->mutex);
-	}
-
-	LWLockRelease(pgss->lock);
-}
-
-/*
- * Reset all statement statistics.
- */
-Datum
-pg_stat_statements_reset(PG_FUNCTION_ARGS)
-{
-	if (!pgss || !pgss_hash)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
-	entry_reset();
-	PG_RETURN_VOID();
-}
-
-#define PG_STAT_STATEMENTS_COLS		14
-
-/*
- * Retrieve statement statistics.
- */
-Datum
-pg_stat_statements(PG_FUNCTION_ARGS)
-{
-	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
-	TupleDesc	tupdesc;
-	Tuplestorestate *tupstore;
-	MemoryContext per_query_ctx;
-	MemoryContext oldcontext;
-	Oid			userid = GetUserId();
-	bool		is_superuser = superuser();
-	HASH_SEQ_STATUS hash_seq;
-	pgssEntry  *entry;
-
-	if (!pgss || !pgss_hash)
-		ereport(ERROR,
-				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
-
-	/* check to see if caller supports us returning a tuplestore */
-	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
-		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("set-valued function called in context that cannot accept a set")));
-	if (!(rsinfo->allowedModes & SFRM_Materialize))
-		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("materialize mode required, but it is not " \
-						"allowed in this context")));
-
-	/* Build a tuple descriptor for our result type */
-	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-		elog(ERROR, "return type must be a row type");
-
-	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
-	oldcontext = MemoryContextSwitchTo(per_query_ctx);
-
-	tupstore = tuplestore_begin_heap(true, false, work_mem);
-	rsinfo->returnMode = SFRM_Materialize;
-	rsinfo->setResult = tupstore;
-	rsinfo->setDesc = tupdesc;
-
-	MemoryContextSwitchTo(oldcontext);
-
-	LWLockAcquire(pgss->lock, LW_SHARED);
-
-	hash_seq_init(&hash_seq, pgss_hash);
-	while ((entry = hash_seq_search(&hash_seq)) != NULL)
-	{
-		Datum		values[PG_STAT_STATEMENTS_COLS];
-		bool		nulls[PG_STAT_STATEMENTS_COLS];
-		int			i = 0;
-		Counters	tmp;
-
-		memset(values, 0, sizeof(values));
-		memset(nulls, 0, sizeof(nulls));
-
-		values[i++] = ObjectIdGetDatum(entry->key.userid);
-		values[i++] = ObjectIdGetDatum(entry->key.dbid);
-
-		if (is_superuser || entry->key.userid == userid)
-		{
-			char	   *qstr;
-
-			qstr = (char *)
-				pg_do_encoding_conversion((unsigned char *) entry->query,
-										  entry->key.query_len,
-										  entry->key.encoding,
-										  GetDatabaseEncoding());
-			values[i++] = CStringGetTextDatum(qstr);
-			if (qstr != entry->query)
-				pfree(qstr);
-		}
-		else
-			values[i++] = CStringGetTextDatum("<insufficient privilege>");
-
-		/* copy counters to a local variable to keep locking time short */
-		{
-			volatile pgssEntry *e = (volatile pgssEntry *) entry;
-
-			SpinLockAcquire(&e->mutex);
-			tmp = e->counters;
-			SpinLockRelease(&e->mutex);
-		}
-
-		values[i++] = Int64GetDatumFast(tmp.calls);
-		values[i++] = Float8GetDatumFast(tmp.total_time);
-		values[i++] = Int64GetDatumFast(tmp.rows);
-		values[i++] = Int64GetDatumFast(tmp.shared_blks_hit);
-		values[i++] = Int64GetDatumFast(tmp.shared_blks_read);
-		values[i++] = Int64GetDatumFast(tmp.shared_blks_written);
-		values[i++] = Int64GetDatumFast(tmp.local_blks_hit);
-		values[i++] = Int64GetDatumFast(tmp.local_blks_read);
-		values[i++] = Int64GetDatumFast(tmp.local_blks_written);
-		values[i++] = Int64GetDatumFast(tmp.temp_blks_read);
-		values[i++] = Int64GetDatumFast(tmp.temp_blks_written);
-
-		Assert(i == PG_STAT_STATEMENTS_COLS);
-
-		tuplestore_putvalues(tupstore, tupdesc, values, nulls);
-	}
-
-	LWLockRelease(pgss->lock);
-
-	/* clean up and return the tuplestore */
-	tuplestore_donestoring(tupstore);
-
-	return (Datum) 0;
-}
-
-/*
- * Estimate shared memory space needed.
- */
-static Size
-pgss_memsize(void)
-{
-	Size		size;
-	Size		entrysize;
-
-	size = MAXALIGN(sizeof(pgssSharedState));
-	entrysize = offsetof(pgssEntry, query) +pgstat_track_activity_query_size;
-	size = add_size(size, hash_estimate_size(pgss_max, entrysize));
-
-	return size;
-}
-
-/*
- * Allocate a new hashtable entry.
- * caller must hold an exclusive lock on pgss->lock
- *
- * Note: despite needing exclusive lock, it's not an error for the target
- * entry to already exist.	This is because pgss_store releases and
- * reacquires lock after failing to find a match; so someone else could
- * have made the entry while we waited to get exclusive lock.
- */
-static pgssEntry *
-entry_alloc(pgssHashKey *key)
-{
-	pgssEntry  *entry;
-	bool		found;
-
-	/* Caller must have clipped query properly */
-	Assert(key->query_len < pgss->query_size);
-
-	/* Make space if needed */
-	while (hash_get_num_entries(pgss_hash) >= pgss_max)
-		entry_dealloc();
-
-	/* Find or create an entry with desired hash code */
-	entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER, &found);
-
-	if (!found)
-	{
-		/* New entry, initialize it */
-
-		/* dynahash tried to copy the key for us, but must fix query_ptr */
-		entry->key.query_ptr = entry->query;
-		/* reset the statistics */
-		memset(&entry->counters, 0, sizeof(Counters));
-		entry->counters.usage = USAGE_INIT;
-		/* re-initialize the mutex each time ... we assume no one using it */
-		SpinLockInit(&entry->mutex);
-		/* ... and don't forget the query text */
-		memcpy(entry->query, key->query_ptr, key->query_len);
-		entry->query[key->query_len] = '\0';
-	}
-
-	return entry;
-}
-
-/*
- * qsort comparator for sorting into increasing usage order
- */
-static int
-entry_cmp(const void *lhs, const void *rhs)
-{
-	double		l_usage = (*(const pgssEntry **) lhs)->counters.usage;
-	double		r_usage = (*(const pgssEntry **) rhs)->counters.usage;
-
-	if (l_usage < r_usage)
-		return -1;
-	else if (l_usage > r_usage)
-		return +1;
-	else
-		return 0;
-}
-
-/*
- * Deallocate least used entries.
- * Caller must hold an exclusive lock on pgss->lock.
- */
-static void
-entry_dealloc(void)
-{
-	HASH_SEQ_STATUS hash_seq;
-	pgssEntry **entries;
-	pgssEntry  *entry;
-	int			nvictims;
-	int			i;
-
-	/* Sort entries by usage and deallocate USAGE_DEALLOC_PERCENT of them. */
-
-	entries = palloc(hash_get_num_entries(pgss_hash) * sizeof(pgssEntry *));
-
-	i = 0;
-	hash_seq_init(&hash_seq, pgss_hash);
-	while ((entry = hash_seq_search(&hash_seq)) != NULL)
-	{
-		entries[i++] = entry;
-		entry->counters.usage *= USAGE_DECREASE_FACTOR;
-	}
-
-	qsort(entries, i, sizeof(pgssEntry *), entry_cmp);
-	nvictims = Max(10, i * USAGE_DEALLOC_PERCENT / 100);
-	nvictims = Min(nvictims, i);
-
-	for (i = 0; i < nvictims; i++)
-	{
-		hash_search(pgss_hash, &entries[i]->key, HASH_REMOVE, NULL);
-	}
-
-	pfree(entries);
-}
-
-/*
- * Release all entries.
- */
-static void
-entry_reset(void)
-{
-	HASH_SEQ_STATUS hash_seq;
-	pgssEntry  *entry;
-
-	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
-
-	hash_seq_init(&hash_seq, pgss_hash);
-	while ((entry = hash_seq_search(&hash_seq)) != NULL)
-	{
-		hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
-	}
-
-	LWLockRelease(pgss->lock);
-}
diff --git a/contrib/pg_stat_statements/pg_stat_statements.control b/contrib/pg_stat_statements/pg_stat_statements.control
deleted file mode 100644
index 6f9a947..0000000
--- a/contrib/pg_stat_statements/pg_stat_statements.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pg_stat_statements extension
-comment = 'track execution statistics of all SQL statements executed'
-default_version = '1.0'
-module_pathname = '$libdir/pg_stat_statements'
-relocatable = true
diff --git a/contrib/pgrowlocks/Makefile b/contrib/pgrowlocks/Makefile
deleted file mode 100644
index f56389b..0000000
--- a/contrib/pgrowlocks/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pgrowlocks/Makefile
-
-MODULE_big	= pgrowlocks
-OBJS		= pgrowlocks.o
-
-EXTENSION = pgrowlocks
-DATA = pgrowlocks--1.0.sql pgrowlocks--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pgrowlocks
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pgrowlocks/pgrowlocks--1.0.sql b/contrib/pgrowlocks/pgrowlocks--1.0.sql
deleted file mode 100644
index 0b60fdc..0000000
--- a/contrib/pgrowlocks/pgrowlocks--1.0.sql
+++ /dev/null
@@ -1,12 +0,0 @@
-/* contrib/pgrowlocks/pgrowlocks--1.0.sql */
-
-CREATE FUNCTION pgrowlocks(IN relname text,
-    OUT locked_row TID,		-- row TID
-    OUT lock_type TEXT,		-- lock type
-    OUT locker XID,		-- locking XID
-    OUT multi bool,		-- multi XID?
-    OUT xids xid[],		-- multi XIDs
-    OUT pids INTEGER[])		-- locker's process id
-RETURNS SETOF record
-AS 'MODULE_PATHNAME', 'pgrowlocks'
-LANGUAGE C STRICT;
diff --git a/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql b/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
deleted file mode 100644
index 2d9d1ee..0000000
--- a/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
+++ /dev/null
@@ -1,3 +0,0 @@
-/* contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql */
-
-ALTER EXTENSION pgrowlocks ADD function pgrowlocks(text);
diff --git a/contrib/pgrowlocks/pgrowlocks.c b/contrib/pgrowlocks/pgrowlocks.c
deleted file mode 100644
index 302bb5c..0000000
--- a/contrib/pgrowlocks/pgrowlocks.c
+++ /dev/null
@@ -1,220 +0,0 @@
-/*
- * contrib/pgrowlocks/pgrowlocks.c
- *
- * Copyright (c) 2005-2006	Tatsuo Ishii
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/multixact.h"
-#include "access/relscan.h"
-#include "access/xact.h"
-#include "catalog/namespace.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "storage/procarray.h"
-#include "utils/acl.h"
-#include "utils/builtins.h"
-#include "utils/tqual.h"
-
-
-PG_MODULE_MAGIC;
-
-PG_FUNCTION_INFO_V1(pgrowlocks);
-
-extern Datum pgrowlocks(PG_FUNCTION_ARGS);
-
-/* ----------
- * pgrowlocks:
- * returns tids of rows being locked
- * ----------
- */
-
-#define NCHARS 32
-
-typedef struct
-{
-	Relation	rel;
-	HeapScanDesc scan;
-	int			ncolumns;
-} MyData;
-
-Datum
-pgrowlocks(PG_FUNCTION_ARGS)
-{
-	FuncCallContext *funcctx;
-	HeapScanDesc scan;
-	HeapTuple	tuple;
-	TupleDesc	tupdesc;
-	AttInMetadata *attinmeta;
-	Datum		result;
-	MyData	   *mydata;
-	Relation	rel;
-
-	if (SRF_IS_FIRSTCALL())
-	{
-		text	   *relname;
-		RangeVar   *relrv;
-		MemoryContext oldcontext;
-		AclResult	aclresult;
-
-		funcctx = SRF_FIRSTCALL_INIT();
-		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
-
-		/* Build a tuple descriptor for our result type */
-		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-			elog(ERROR, "return type must be a row type");
-
-		attinmeta = TupleDescGetAttInMetadata(tupdesc);
-		funcctx->attinmeta = attinmeta;
-
-		relname = PG_GETARG_TEXT_P(0);
-		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-		rel = heap_openrv(relrv, AccessShareLock);
-
-		/* check permissions: must have SELECT on table */
-		aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),
-									  ACL_SELECT);
-		if (aclresult != ACLCHECK_OK)
-			aclcheck_error(aclresult, ACL_KIND_CLASS,
-						   RelationGetRelationName(rel));
-
-		scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
-		mydata = palloc(sizeof(*mydata));
-		mydata->rel = rel;
-		mydata->scan = scan;
-		mydata->ncolumns = tupdesc->natts;
-		funcctx->user_fctx = mydata;
-
-		MemoryContextSwitchTo(oldcontext);
-	}
-
-	funcctx = SRF_PERCALL_SETUP();
-	attinmeta = funcctx->attinmeta;
-	mydata = (MyData *) funcctx->user_fctx;
-	scan = mydata->scan;
-
-	/* scan the relation */
-	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
-	{
-		/* must hold a buffer lock to call HeapTupleSatisfiesUpdate */
-		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
-
-		if (HeapTupleSatisfiesUpdate(tuple->t_data,
-									 GetCurrentCommandId(false),
-									 scan->rs_cbuf) == HeapTupleBeingUpdated)
-		{
-
-			char	  **values;
-			int			i;
-
-			values = (char **) palloc(mydata->ncolumns * sizeof(char *));
-
-			i = 0;
-			values[i++] = (char *) DirectFunctionCall1(tidout, PointerGetDatum(&tuple->t_self));
-
-			if (tuple->t_data->t_infomask & HEAP_XMAX_SHARED_LOCK)
-				values[i++] = pstrdup("Shared");
-			else
-				values[i++] = pstrdup("Exclusive");
-			values[i] = palloc(NCHARS * sizeof(char));
-			snprintf(values[i++], NCHARS, "%d", HeapTupleHeaderGetXmax(tuple->t_data));
-			if (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
-			{
-				TransactionId *xids;
-				int			nxids;
-				int			j;
-				int			isValidXid = 0;		/* any valid xid ever exists? */
-
-				values[i++] = pstrdup("true");
-				nxids = GetMultiXactIdMembers(HeapTupleHeaderGetXmax(tuple->t_data), &xids);
-				if (nxids == -1)
-				{
-					elog(ERROR, "GetMultiXactIdMembers returns error");
-				}
-
-				values[i] = palloc(NCHARS * nxids);
-				values[i + 1] = palloc(NCHARS * nxids);
-				strcpy(values[i], "{");
-				strcpy(values[i + 1], "{");
-
-				for (j = 0; j < nxids; j++)
-				{
-					char		buf[NCHARS];
-
-					if (TransactionIdIsInProgress(xids[j]))
-					{
-						if (isValidXid)
-						{
-							strcat(values[i], ",");
-							strcat(values[i + 1], ",");
-						}
-						snprintf(buf, NCHARS, "%d", xids[j]);
-						strcat(values[i], buf);
-						snprintf(buf, NCHARS, "%d", BackendXidGetPid(xids[j]));
-						strcat(values[i + 1], buf);
-
-						isValidXid = 1;
-					}
-				}
-
-				strcat(values[i], "}");
-				strcat(values[i + 1], "}");
-				i++;
-			}
-			else
-			{
-				values[i++] = pstrdup("false");
-				values[i] = palloc(NCHARS * sizeof(char));
-				snprintf(values[i++], NCHARS, "{%d}", HeapTupleHeaderGetXmax(tuple->t_data));
-
-				values[i] = palloc(NCHARS * sizeof(char));
-				snprintf(values[i++], NCHARS, "{%d}", BackendXidGetPid(HeapTupleHeaderGetXmax(tuple->t_data)));
-			}
-
-			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-
-			/* build a tuple */
-			tuple = BuildTupleFromCStrings(attinmeta, values);
-
-			/* make the tuple into a datum */
-			result = HeapTupleGetDatum(tuple);
-
-			/* Clean up */
-			for (i = 0; i < mydata->ncolumns; i++)
-				pfree(values[i]);
-			pfree(values);
-
-			SRF_RETURN_NEXT(funcctx, result);
-		}
-		else
-		{
-			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-		}
-	}
-
-	heap_endscan(scan);
-	heap_close(mydata->rel, AccessShareLock);
-
-	SRF_RETURN_DONE(funcctx);
-}
diff --git a/contrib/pgrowlocks/pgrowlocks.control b/contrib/pgrowlocks/pgrowlocks.control
deleted file mode 100644
index a6ba164..0000000
--- a/contrib/pgrowlocks/pgrowlocks.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pgrowlocks extension
-comment = 'show row-level locking information'
-default_version = '1.0'
-module_pathname = '$libdir/pgrowlocks'
-relocatable = true
diff --git a/contrib/pgstattuple/Makefile b/contrib/pgstattuple/Makefile
deleted file mode 100644
index 13b8709..0000000
--- a/contrib/pgstattuple/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pgstattuple/Makefile
-
-MODULE_big	= pgstattuple
-OBJS		= pgstattuple.o pgstatindex.o
-
-EXTENSION = pgstattuple
-DATA = pgstattuple--1.0.sql pgstattuple--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pgstattuple
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c
deleted file mode 100644
index fd2cc92..0000000
--- a/contrib/pgstattuple/pgstatindex.c
+++ /dev/null
@@ -1,282 +0,0 @@
-/*
- * contrib/pgstattuple/pgstatindex.c
- *
- *
- * pgstatindex
- *
- * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "access/nbtree.h"
-#include "catalog/namespace.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "utils/builtins.h"
-
-
-extern Datum pgstatindex(PG_FUNCTION_ARGS);
-extern Datum pg_relpages(PG_FUNCTION_ARGS);
-
-PG_FUNCTION_INFO_V1(pgstatindex);
-PG_FUNCTION_INFO_V1(pg_relpages);
-
-#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
-#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
-
-#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
-		if ( !(FirstOffsetNumber <= (offnum) && \
-						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
-			 elog(ERROR, "page offset number out of range"); }
-
-/* note: BlockNumber is unsigned, hence can't be negative */
-#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
-		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
-			 elog(ERROR, "block number out of range"); }
-
-/* ------------------------------------------------
- * A structure for a whole btree index statistics
- * used by pgstatindex().
- * ------------------------------------------------
- */
-typedef struct BTIndexStat
-{
-	uint32		version;
-	uint32		level;
-	BlockNumber root_blkno;
-
-	uint64		root_pages;
-	uint64		internal_pages;
-	uint64		leaf_pages;
-	uint64		empty_pages;
-	uint64		deleted_pages;
-
-	uint64		max_avail;
-	uint64		free_space;
-
-	uint64		fragments;
-} BTIndexStat;
-
-/* ------------------------------------------------------
- * pgstatindex()
- *
- * Usage: SELECT * FROM pgstatindex('t1_pkey');
- * ------------------------------------------------------
- */
-Datum
-pgstatindex(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	Relation	rel;
-	RangeVar   *relrv;
-	Datum		result;
-	BlockNumber nblocks;
-	BlockNumber blkno;
-	BTIndexStat indexStat;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use pgstattuple functions"))));
-
-	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-	rel = relation_openrv(relrv, AccessShareLock);
-
-	if (!IS_INDEX(rel) || !IS_BTREE(rel))
-		elog(ERROR, "relation \"%s\" is not a btree index",
-			 RelationGetRelationName(rel));
-
-	/*
-	 * Reject attempts to read non-local temporary relations; we would be
-	 * likely to get wrong data since we have no visibility into the owning
-	 * session's local buffers.
-	 */
-	if (RELATION_IS_OTHER_TEMP(rel))
-		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("cannot access temporary tables of other sessions")));
-
-	/*
-	 * Read metapage
-	 */
-	{
-		Buffer		buffer = ReadBuffer(rel, 0);
-		Page		page = BufferGetPage(buffer);
-		BTMetaPageData *metad = BTPageGetMeta(page);
-
-		indexStat.version = metad->btm_version;
-		indexStat.level = metad->btm_level;
-		indexStat.root_blkno = metad->btm_root;
-
-		ReleaseBuffer(buffer);
-	}
-
-	/* -- init counters -- */
-	indexStat.root_pages = 0;
-	indexStat.internal_pages = 0;
-	indexStat.leaf_pages = 0;
-	indexStat.empty_pages = 0;
-	indexStat.deleted_pages = 0;
-
-	indexStat.max_avail = 0;
-	indexStat.free_space = 0;
-
-	indexStat.fragments = 0;
-
-	/*
-	 * Scan all blocks except the metapage
-	 */
-	nblocks = RelationGetNumberOfBlocks(rel);
-
-	for (blkno = 1; blkno < nblocks; blkno++)
-	{
-		Buffer		buffer;
-		Page		page;
-		BTPageOpaque opaque;
-
-		/* Read and lock buffer */
-		buffer = ReadBuffer(rel, blkno);
-		LockBuffer(buffer, BUFFER_LOCK_SHARE);
-
-		page = BufferGetPage(buffer);
-		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
-
-		/* Determine page type, and update totals */
-
-		if (P_ISLEAF(opaque))
-		{
-			int			max_avail;
-
-			max_avail = BLCKSZ - (BLCKSZ - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
-			indexStat.max_avail += max_avail;
-			indexStat.free_space += PageGetFreeSpace(page);
-
-			indexStat.leaf_pages++;
-
-			/*
-			 * If the next leaf is on an earlier block, it means a
-			 * fragmentation.
-			 */
-			if (opaque->btpo_next != P_NONE && opaque->btpo_next < blkno)
-				indexStat.fragments++;
-		}
-		else if (P_ISDELETED(opaque))
-			indexStat.deleted_pages++;
-		else if (P_IGNORE(opaque))
-			indexStat.empty_pages++;
-		else if (P_ISROOT(opaque))
-			indexStat.root_pages++;
-		else
-			indexStat.internal_pages++;
-
-		/* Unlock and release buffer */
-		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-		ReleaseBuffer(buffer);
-	}
-
-	relation_close(rel, AccessShareLock);
-
-	/*----------------------------
-	 * Build a result tuple
-	 *----------------------------
-	 */
-	{
-		TupleDesc	tupleDesc;
-		int			j;
-		char	   *values[10];
-		HeapTuple	tuple;
-
-		/* Build a tuple descriptor for our result type */
-		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
-			elog(ERROR, "return type must be a row type");
-
-		j = 0;
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%d", indexStat.version);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%d", indexStat.level);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, INT64_FORMAT,
-				 (indexStat.root_pages +
-				  indexStat.leaf_pages +
-				  indexStat.internal_pages +
-				  indexStat.deleted_pages +
-				  indexStat.empty_pages) * BLCKSZ);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%u", indexStat.root_blkno);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, INT64_FORMAT, indexStat.internal_pages);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, INT64_FORMAT, indexStat.leaf_pages);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, INT64_FORMAT, indexStat.empty_pages);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, INT64_FORMAT, indexStat.deleted_pages);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%.2f", 100.0 - (double) indexStat.free_space / (double) indexStat.max_avail * 100.0);
-		values[j] = palloc(32);
-		snprintf(values[j++], 32, "%.2f", (double) indexStat.fragments / (double) indexStat.leaf_pages * 100.0);
-
-		tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
-									   values);
-
-		result = HeapTupleGetDatum(tuple);
-	}
-
-	PG_RETURN_DATUM(result);
-}
-
-/* --------------------------------------------------------
- * pg_relpages()
- *
- * Get the number of pages of the table/index.
- *
- * Usage: SELECT pg_relpages('t1');
- *		  SELECT pg_relpages('t1_pkey');
- * --------------------------------------------------------
- */
-Datum
-pg_relpages(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	int64		relpages;
-	Relation	rel;
-	RangeVar   *relrv;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use pgstattuple functions"))));
-
-	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-	rel = relation_openrv(relrv, AccessShareLock);
-
-	/* note: this will work OK on non-local temp tables */
-
-	relpages = RelationGetNumberOfBlocks(rel);
-
-	relation_close(rel, AccessShareLock);
-
-	PG_RETURN_INT64(relpages);
-}
diff --git a/contrib/pgstattuple/pgstattuple--1.0.sql b/contrib/pgstattuple/pgstattuple--1.0.sql
deleted file mode 100644
index 83445ec..0000000
--- a/contrib/pgstattuple/pgstattuple--1.0.sql
+++ /dev/null
@@ -1,46 +0,0 @@
-/* contrib/pgstattuple/pgstattuple--1.0.sql */
-
-CREATE FUNCTION pgstattuple(IN relname text,
-    OUT table_len BIGINT,		-- physical table length in bytes
-    OUT tuple_count BIGINT,		-- number of live tuples
-    OUT tuple_len BIGINT,		-- total tuples length in bytes
-    OUT tuple_percent FLOAT8,		-- live tuples in %
-    OUT dead_tuple_count BIGINT,	-- number of dead tuples
-    OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
-    OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
-    OUT free_space BIGINT,		-- free space in bytes
-    OUT free_percent FLOAT8)		-- free space in %
-AS 'MODULE_PATHNAME', 'pgstattuple'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION pgstattuple(IN reloid oid,
-    OUT table_len BIGINT,		-- physical table length in bytes
-    OUT tuple_count BIGINT,		-- number of live tuples
-    OUT tuple_len BIGINT,		-- total tuples length in bytes
-    OUT tuple_percent FLOAT8,		-- live tuples in %
-    OUT dead_tuple_count BIGINT,	-- number of dead tuples
-    OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
-    OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
-    OUT free_space BIGINT,		-- free space in bytes
-    OUT free_percent FLOAT8)		-- free space in %
-AS 'MODULE_PATHNAME', 'pgstattuplebyid'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION pgstatindex(IN relname text,
-    OUT version INT,
-    OUT tree_level INT,
-    OUT index_size BIGINT,
-    OUT root_block_no BIGINT,
-    OUT internal_pages BIGINT,
-    OUT leaf_pages BIGINT,
-    OUT empty_pages BIGINT,
-    OUT deleted_pages BIGINT,
-    OUT avg_leaf_density FLOAT8,
-    OUT leaf_fragmentation FLOAT8)
-AS 'MODULE_PATHNAME', 'pgstatindex'
-LANGUAGE C STRICT;
-
-CREATE FUNCTION pg_relpages(IN relname text)
-RETURNS BIGINT
-AS 'MODULE_PATHNAME', 'pg_relpages'
-LANGUAGE C STRICT;
diff --git a/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql b/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql
deleted file mode 100644
index 3cfb8db..0000000
--- a/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql
+++ /dev/null
@@ -1,6 +0,0 @@
-/* contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql */
-
-ALTER EXTENSION pgstattuple ADD function pgstattuple(text);
-ALTER EXTENSION pgstattuple ADD function pgstattuple(oid);
-ALTER EXTENSION pgstattuple ADD function pgstatindex(text);
-ALTER EXTENSION pgstattuple ADD function pg_relpages(text);
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
deleted file mode 100644
index e5ddd87..0000000
--- a/contrib/pgstattuple/pgstattuple.c
+++ /dev/null
@@ -1,518 +0,0 @@
-/*
- * contrib/pgstattuple/pgstattuple.c
- *
- * Copyright (c) 2001,2002	Tatsuo Ishii
- *
- * Permission to use, copy, modify, and distribute this software and
- * its documentation for any purpose, without fee, and without a
- * written agreement is hereby granted, provided that the above
- * copyright notice and this paragraph and the following two
- * paragraphs appear in all copies.
- *
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
- * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
- * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
- * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
- * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
- * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
- */
-
-#include "postgres.h"
-
-#include "access/gist_private.h"
-#include "access/hash.h"
-#include "access/nbtree.h"
-#include "access/relscan.h"
-#include "catalog/namespace.h"
-#include "funcapi.h"
-#include "miscadmin.h"
-#include "storage/bufmgr.h"
-#include "storage/lmgr.h"
-#include "utils/builtins.h"
-#include "utils/tqual.h"
-
-
-PG_MODULE_MAGIC;
-
-PG_FUNCTION_INFO_V1(pgstattuple);
-PG_FUNCTION_INFO_V1(pgstattuplebyid);
-
-extern Datum pgstattuple(PG_FUNCTION_ARGS);
-extern Datum pgstattuplebyid(PG_FUNCTION_ARGS);
-
-/*
- * struct pgstattuple_type
- *
- * tuple_percent, dead_tuple_percent and free_percent are computable,
- * so not defined here.
- */
-typedef struct pgstattuple_type
-{
-	uint64		table_len;
-	uint64		tuple_count;
-	uint64		tuple_len;
-	uint64		dead_tuple_count;
-	uint64		dead_tuple_len;
-	uint64		free_space;		/* free/reusable space in bytes */
-} pgstattuple_type;
-
-typedef void (*pgstat_page) (pgstattuple_type *, Relation, BlockNumber);
-
-static Datum build_pgstattuple_type(pgstattuple_type *stat,
-					   FunctionCallInfo fcinfo);
-static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo);
-static Datum pgstat_heap(Relation rel, FunctionCallInfo fcinfo);
-static void pgstat_btree_page(pgstattuple_type *stat,
-				  Relation rel, BlockNumber blkno);
-static void pgstat_hash_page(pgstattuple_type *stat,
-				 Relation rel, BlockNumber blkno);
-static void pgstat_gist_page(pgstattuple_type *stat,
-				 Relation rel, BlockNumber blkno);
-static Datum pgstat_index(Relation rel, BlockNumber start,
-			 pgstat_page pagefn, FunctionCallInfo fcinfo);
-static void pgstat_index_page(pgstattuple_type *stat, Page page,
-				  OffsetNumber minoff, OffsetNumber maxoff);
-
-/*
- * build_pgstattuple_type -- build a pgstattuple_type tuple
- */
-static Datum
-build_pgstattuple_type(pgstattuple_type *stat, FunctionCallInfo fcinfo)
-{
-#define NCOLUMNS	9
-#define NCHARS		32
-
-	HeapTuple	tuple;
-	char	   *values[NCOLUMNS];
-	char		values_buf[NCOLUMNS][NCHARS];
-	int			i;
-	double		tuple_percent;
-	double		dead_tuple_percent;
-	double		free_percent;	/* free/reusable space in % */
-	TupleDesc	tupdesc;
-	AttInMetadata *attinmeta;
-
-	/* Build a tuple descriptor for our result type */
-	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
-		elog(ERROR, "return type must be a row type");
-
-	/*
-	 * Generate attribute metadata needed later to produce tuples from raw C
-	 * strings
-	 */
-	attinmeta = TupleDescGetAttInMetadata(tupdesc);
-
-	if (stat->table_len == 0)
-	{
-		tuple_percent = 0.0;
-		dead_tuple_percent = 0.0;
-		free_percent = 0.0;
-	}
-	else
-	{
-		tuple_percent = 100.0 * stat->tuple_len / stat->table_len;
-		dead_tuple_percent = 100.0 * stat->dead_tuple_len / stat->table_len;
-		free_percent = 100.0 * stat->free_space / stat->table_len;
-	}
-
-	/*
-	 * Prepare a values array for constructing the tuple. This should be an
-	 * array of C strings which will be processed later by the appropriate
-	 * "in" functions.
-	 */
-	for (i = 0; i < NCOLUMNS; i++)
-		values[i] = values_buf[i];
-	i = 0;
-	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->table_len);
-	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_count);
-	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_len);
-	snprintf(values[i++], NCHARS, "%.2f", tuple_percent);
-	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_count);
-	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_len);
-	snprintf(values[i++], NCHARS, "%.2f", dead_tuple_percent);
-	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->free_space);
-	snprintf(values[i++], NCHARS, "%.2f", free_percent);
-
-	/* build a tuple */
-	tuple = BuildTupleFromCStrings(attinmeta, values);
-
-	/* make the tuple into a datum */
-	return HeapTupleGetDatum(tuple);
-}
-
-/* ----------
- * pgstattuple:
- * returns live/dead tuples info
- *
- * C FUNCTION definition
- * pgstattuple(text) returns pgstattuple_type
- * see pgstattuple.sql for pgstattuple_type
- * ----------
- */
-
-Datum
-pgstattuple(PG_FUNCTION_ARGS)
-{
-	text	   *relname = PG_GETARG_TEXT_P(0);
-	RangeVar   *relrv;
-	Relation	rel;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use pgstattuple functions"))));
-
-	/* open relation */
-	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
-	rel = relation_openrv(relrv, AccessShareLock);
-
-	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
-}
-
-Datum
-pgstattuplebyid(PG_FUNCTION_ARGS)
-{
-	Oid			relid = PG_GETARG_OID(0);
-	Relation	rel;
-
-	if (!superuser())
-		ereport(ERROR,
-				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
-				 (errmsg("must be superuser to use pgstattuple functions"))));
-
-	/* open relation */
-	rel = relation_open(relid, AccessShareLock);
-
-	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
-}
-
-/*
- * pgstat_relation
- */
-static Datum
-pgstat_relation(Relation rel, FunctionCallInfo fcinfo)
-{
-	const char *err;
-
-	/*
-	 * Reject attempts to read non-local temporary relations; we would be
-	 * likely to get wrong data since we have no visibility into the owning
-	 * session's local buffers.
-	 */
-	if (RELATION_IS_OTHER_TEMP(rel))
-		ereport(ERROR,
-				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-				 errmsg("cannot access temporary tables of other sessions")));
-
-	switch (rel->rd_rel->relkind)
-	{
-		case RELKIND_RELATION:
-		case RELKIND_TOASTVALUE:
-		case RELKIND_UNCATALOGED:
-		case RELKIND_SEQUENCE:
-			return pgstat_heap(rel, fcinfo);
-		case RELKIND_INDEX:
-			switch (rel->rd_rel->relam)
-			{
-				case BTREE_AM_OID:
-					return pgstat_index(rel, BTREE_METAPAGE + 1,
-										pgstat_btree_page, fcinfo);
-				case HASH_AM_OID:
-					return pgstat_index(rel, HASH_METAPAGE + 1,
-										pgstat_hash_page, fcinfo);
-				case GIST_AM_OID:
-					return pgstat_index(rel, GIST_ROOT_BLKNO + 1,
-										pgstat_gist_page, fcinfo);
-				case GIN_AM_OID:
-					err = "gin index";
-					break;
-				default:
-					err = "unknown index";
-					break;
-			}
-			break;
-		case RELKIND_VIEW:
-			err = "view";
-			break;
-		case RELKIND_COMPOSITE_TYPE:
-			err = "composite type";
-			break;
-		case RELKIND_FOREIGN_TABLE:
-			err = "foreign table";
-			break;
-		default:
-			err = "unknown";
-			break;
-	}
-
-	ereport(ERROR,
-			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
-			 errmsg("\"%s\" (%s) is not supported",
-					RelationGetRelationName(rel), err)));
-	return 0;					/* should not happen */
-}
-
-/*
- * pgstat_heap -- returns live/dead tuples info in a heap
- */
-static Datum
-pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
-{
-	HeapScanDesc scan;
-	HeapTuple	tuple;
-	BlockNumber nblocks;
-	BlockNumber block = 0;		/* next block to count free space in */
-	BlockNumber tupblock;
-	Buffer		buffer;
-	pgstattuple_type stat = {0};
-
-	/* Disable syncscan because we assume we scan from block zero upwards */
-	scan = heap_beginscan_strat(rel, SnapshotAny, 0, NULL, true, false);
-
-	nblocks = scan->rs_nblocks; /* # blocks to be scanned */
-
-	/* scan the relation */
-	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
-	{
-		CHECK_FOR_INTERRUPTS();
-
-		/* must hold a buffer lock to call HeapTupleSatisfiesVisibility */
-		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
-
-		if (HeapTupleSatisfiesVisibility(tuple, SnapshotNow, scan->rs_cbuf))
-		{
-			stat.tuple_len += tuple->t_len;
-			stat.tuple_count++;
-		}
-		else
-		{
-			stat.dead_tuple_len += tuple->t_len;
-			stat.dead_tuple_count++;
-		}
-
-		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
-
-		/*
-		 * To avoid physically reading the table twice, try to do the
-		 * free-space scan in parallel with the heap scan.	However,
-		 * heap_getnext may find no tuples on a given page, so we cannot
-		 * simply examine the pages returned by the heap scan.
-		 */
-		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
-
-		while (block <= tupblock)
-		{
-			CHECK_FOR_INTERRUPTS();
-
-			buffer = ReadBuffer(rel, block);
-			LockBuffer(buffer, BUFFER_LOCK_SHARE);
-			stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
-			UnlockReleaseBuffer(buffer);
-			block++;
-		}
-	}
-	heap_endscan(scan);
-
-	while (block < nblocks)
-	{
-		CHECK_FOR_INTERRUPTS();
-
-		buffer = ReadBuffer(rel, block);
-		LockBuffer(buffer, BUFFER_LOCK_SHARE);
-		stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
-		UnlockReleaseBuffer(buffer);
-		block++;
-	}
-
-	relation_close(rel, AccessShareLock);
-
-	stat.table_len = (uint64) nblocks *BLCKSZ;
-
-	return build_pgstattuple_type(&stat, fcinfo);
-}
-
-/*
- * pgstat_btree_page -- check tuples in a btree page
- */
-static void
-pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
-{
-	Buffer		buf;
-	Page		page;
-
-	buf = ReadBuffer(rel, blkno);
-	LockBuffer(buf, BT_READ);
-	page = BufferGetPage(buf);
-
-	/* Page is valid, see what to do with it */
-	if (PageIsNew(page))
-	{
-		/* fully empty page */
-		stat->free_space += BLCKSZ;
-	}
-	else
-	{
-		BTPageOpaque opaque;
-
-		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
-		if (opaque->btpo_flags & (BTP_DELETED | BTP_HALF_DEAD))
-		{
-			/* recyclable page */
-			stat->free_space += BLCKSZ;
-		}
-		else if (P_ISLEAF(opaque))
-		{
-			pgstat_index_page(stat, page, P_FIRSTDATAKEY(opaque),
-							  PageGetMaxOffsetNumber(page));
-		}
-		else
-		{
-			/* root or node */
-		}
-	}
-
-	_bt_relbuf(rel, buf);
-}
-
-/*
- * pgstat_hash_page -- check tuples in a hash page
- */
-static void
-pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
-{
-	Buffer		buf;
-	Page		page;
-
-	_hash_getlock(rel, blkno, HASH_SHARE);
-	buf = _hash_getbuf(rel, blkno, HASH_READ, 0);
-	page = BufferGetPage(buf);
-
-	if (PageGetSpecialSize(page) == MAXALIGN(sizeof(HashPageOpaqueData)))
-	{
-		HashPageOpaque opaque;
-
-		opaque = (HashPageOpaque) PageGetSpecialPointer(page);
-		switch (opaque->hasho_flag)
-		{
-			case LH_UNUSED_PAGE:
-				stat->free_space += BLCKSZ;
-				break;
-			case LH_BUCKET_PAGE:
-			case LH_OVERFLOW_PAGE:
-				pgstat_index_page(stat, page, FirstOffsetNumber,
-								  PageGetMaxOffsetNumber(page));
-				break;
-			case LH_BITMAP_PAGE:
-			case LH_META_PAGE:
-			default:
-				break;
-		}
-	}
-	else
-	{
-		/* maybe corrupted */
-	}
-
-	_hash_relbuf(rel, buf);
-	_hash_droplock(rel, blkno, HASH_SHARE);
-}
-
-/*
- * pgstat_gist_page -- check tuples in a gist page
- */
-static void
-pgstat_gist_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
-{
-	Buffer		buf;
-	Page		page;
-
-	buf = ReadBuffer(rel, blkno);
-	LockBuffer(buf, GIST_SHARE);
-	gistcheckpage(rel, buf);
-	page = BufferGetPage(buf);
-
-	if (GistPageIsLeaf(page))
-	{
-		pgstat_index_page(stat, page, FirstOffsetNumber,
-						  PageGetMaxOffsetNumber(page));
-	}
-	else
-	{
-		/* root or node */
-	}
-
-	UnlockReleaseBuffer(buf);
-}
-
-/*
- * pgstat_index -- returns live/dead tuples info in a generic index
- */
-static Datum
-pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
-			 FunctionCallInfo fcinfo)
-{
-	BlockNumber nblocks;
-	BlockNumber blkno;
-	pgstattuple_type stat = {0};
-
-	blkno = start;
-	for (;;)
-	{
-		/* Get the current relation length */
-		LockRelationForExtension(rel, ExclusiveLock);
-		nblocks = RelationGetNumberOfBlocks(rel);
-		UnlockRelationForExtension(rel, ExclusiveLock);
-
-		/* Quit if we've scanned the whole relation */
-		if (blkno >= nblocks)
-		{
-			stat.table_len = (uint64) nblocks *BLCKSZ;
-
-			break;
-		}
-
-		for (; blkno < nblocks; blkno++)
-		{
-			CHECK_FOR_INTERRUPTS();
-
-			pagefn(&stat, rel, blkno);
-		}
-	}
-
-	relation_close(rel, AccessShareLock);
-
-	return build_pgstattuple_type(&stat, fcinfo);
-}
-
-/*
- * pgstat_index_page -- for generic index page
- */
-static void
-pgstat_index_page(pgstattuple_type *stat, Page page,
-				  OffsetNumber minoff, OffsetNumber maxoff)
-{
-	OffsetNumber i;
-
-	stat->free_space += PageGetFreeSpace(page);
-
-	for (i = minoff; i <= maxoff; i = OffsetNumberNext(i))
-	{
-		ItemId		itemid = PageGetItemId(page, i);
-
-		if (ItemIdIsDead(itemid))
-		{
-			stat->dead_tuple_count++;
-			stat->dead_tuple_len += ItemIdGetLength(itemid);
-		}
-		else
-		{
-			stat->tuple_count++;
-			stat->tuple_len += ItemIdGetLength(itemid);
-		}
-	}
-}
diff --git a/contrib/pgstattuple/pgstattuple.control b/contrib/pgstattuple/pgstattuple.control
deleted file mode 100644
index 7b5129b..0000000
--- a/contrib/pgstattuple/pgstattuple.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pgstattuple extension
-comment = 'show tuple-level statistics'
-default_version = '1.0'
-module_pathname = '$libdir/pgstattuple'
-relocatable = true
diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml
index adf09ca..0d16084 100644
--- a/doc/src/sgml/contrib.sgml
+++ b/doc/src/sgml/contrib.sgml
@@ -89,7 +89,6 @@ CREATE EXTENSION <replaceable>module_name</> FROM unpackaged;
 
  &adminpack;
  &auth-delay;
- &auto-explain;
  &btree-gin;
  &btree-gist;
  &chkpass;
@@ -109,17 +108,11 @@ CREATE EXTENSION <replaceable>module_name</> FROM unpackaged;
  &lo;
  &ltree;
  &oid2name;
- &pageinspect;
  &passwordcheck;
  &pgarchivecleanup;
  &pgbench;
- &pgbuffercache;
  &pgcrypto;
- &pgfreespacemap;
- &pgrowlocks;
  &pgstandby;
- &pgstatstatements;
- &pgstattuple;
  &pgtestfsync;
  &pgtrgm;
  &pgupgrade;
diff --git a/doc/src/sgml/extensions.sgml b/doc/src/sgml/extensions.sgml
new file mode 100644
index 0000000..eea69c3
--- /dev/null
+++ b/doc/src/sgml/extensions.sgml
@@ -0,0 +1,77 @@
+<!-- doc/src/sgml/extensions.sgml -->
+
+<appendix id="extensions">
+ <title>Core Extensions</title>
+
+ <para>
+ 
+  It is difficult to manage all of the components to
+  <productname>PostgreSQL</productname> without making the database core
+  larger than it must be.  But many enhancements can be efficiently developed
+  using the facilites normally intended for adding external modules. 
+  This appendix contains information regarding core extensions that are
+  built and included with a standard installation of PostgreSQL.  These
+  core extensions supply useful features in areas such as database diagnostics
+  and performance monitoring.
+ </para>
+
+ <para>  
+  Some of these features could instead be made available as built-in functions.
+  Providing them as extension modules instead reduces the amount of code to be
+  maintained in the main database.  It also serves as an example of how
+  powerful the extension features described in <xref linkend="extend"> are.  It
+  is possible to write your own extensions of similar utility to those listed
+  here, and to use these as examples for doing so.
+ </para>
+
+ <para>
+  To make use of one of these extensions, you need to register the new SQL
+  objects in the database system.  This is done by executing a
+  <xref linkend="sql-createextension"> command.  In a fresh database,
+  you can simply do
+
+<programlisting>
+CREATE EXTENSION <replaceable>module_name</>;
+</programlisting>
+
+  This command must be run by a database superuser.  This registers the
+  new SQL objects in the current database only, so you need to run this
+  command in each database that you want
+  the module's facilities to be available in.  Alternatively, run it in
+  database <literal>template1</> so that the extension will be copied into
+  subsequently-created databases by default.
+ </para>
+
+ <para>
+  Many modules allow you to install their objects in a schema of your
+  choice.  To do that, add <literal>SCHEMA
+  <replaceable>schema_name</></literal> to the <command>CREATE EXTENSION</>
+  command.  By default, the objects will be placed in your current creation
+  target schema, typically <literal>public</>.
+ </para>
+
+ <para>
+  If your database was brought forward by dump and reload from a pre-9.1
+  version of <productname>PostgreSQL</>, and you had been using the pre-9.1
+  version of the module in it, you should instead do
+
+<programlisting>
+CREATE EXTENSION <replaceable>module_name</> FROM unpackaged;
+</programlisting>
+
+  This will update the pre-9.1 objects of the module into a proper
+  <firstterm>extension</> object.  Future updates to the module will be
+  managed by <xref linkend="sql-alterextension">.
+  For more information about extension updates, see
+  <xref linkend="extend-extensions">.
+ </para>
+
+ &auto-explain;
+ &pageinspect;
+ &pgbuffercache;
+ &pgfreespacemap;
+ &pgrowlocks;
+ &pgstatstatements;
+ &pgstattuple;
+
+</appendix>
diff --git a/doc/src/sgml/external-projects.sgml b/doc/src/sgml/external-projects.sgml
index ef516b4..8b12574 100644
--- a/doc/src/sgml/external-projects.sgml
+++ b/doc/src/sgml/external-projects.sgml
@@ -246,9 +246,10 @@
   <para>
    <productname>PostgreSQL</> is designed to be easily extensible. For
    this reason, extensions loaded into the database can function
-   just like features that are built in. The
+   just like features that are built in.  Some <xref linkend="extension">
+   are available in any installation.  The
    <filename>contrib/</> directory shipped with the source code
-   contains several extensions, which are described in
+   contains several optional extensions, which are described in
    <xref linkend="contrib">.  Other extensions are developed
    independently, like <application><ulink
    url="http://www.postgis.org/">PostGIS</ulink></>.  Even
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index ed39e0b..352c6f2 100644
--- a/doc/src/sgml/filelist.sgml
+++ b/doc/src/sgml/filelist.sgml
@@ -91,11 +91,20 @@
 <!ENTITY sources    SYSTEM "sources.sgml">
 <!ENTITY storage    SYSTEM "storage.sgml">
 
+<!-- core extensions -->
+<!ENTITY extensions      SYSTEM "extensions.sgml">
+<!ENTITY auto-explain    SYSTEM "auto-explain.sgml">
+<!ENTITY pageinspect     SYSTEM "pageinspect.sgml">
+<!ENTITY pgbuffercache   SYSTEM "pgbuffercache.sgml">
+<!ENTITY pgfreespacemap  SYSTEM "pgfreespacemap.sgml">
+<!ENTITY pgrowlocks      SYSTEM "pgrowlocks.sgml">
+<!ENTITY pgstatstatements SYSTEM "pgstatstatements.sgml">
+<!ENTITY pgstattuple     SYSTEM "pgstattuple.sgml">
+
 <!-- contrib information -->
 <!ENTITY contrib         SYSTEM "contrib.sgml">
 <!ENTITY adminpack       SYSTEM "adminpack.sgml">
 <!ENTITY auth-delay      SYSTEM "auth-delay.sgml">
-<!ENTITY auto-explain    SYSTEM "auto-explain.sgml">
 <!ENTITY btree-gin       SYSTEM "btree-gin.sgml">
 <!ENTITY btree-gist      SYSTEM "btree-gist.sgml">
 <!ENTITY chkpass         SYSTEM "chkpass.sgml">
@@ -115,17 +124,11 @@
 <!ENTITY lo              SYSTEM "lo.sgml">
 <!ENTITY ltree           SYSTEM "ltree.sgml">
 <!ENTITY oid2name        SYSTEM "oid2name.sgml">
-<!ENTITY pageinspect     SYSTEM "pageinspect.sgml">
 <!ENTITY passwordcheck   SYSTEM "passwordcheck.sgml">
 <!ENTITY pgbench         SYSTEM "pgbench.sgml">
 <!ENTITY pgarchivecleanup SYSTEM "pgarchivecleanup.sgml">
-<!ENTITY pgbuffercache   SYSTEM "pgbuffercache.sgml">
 <!ENTITY pgcrypto        SYSTEM "pgcrypto.sgml">
-<!ENTITY pgfreespacemap  SYSTEM "pgfreespacemap.sgml">
-<!ENTITY pgrowlocks      SYSTEM "pgrowlocks.sgml">
 <!ENTITY pgstandby       SYSTEM "pgstandby.sgml">
-<!ENTITY pgstatstatements SYSTEM "pgstatstatements.sgml">
-<!ENTITY pgstattuple     SYSTEM "pgstattuple.sgml">
 <!ENTITY pgtestfsync     SYSTEM "pgtestfsync.sgml">
 <!ENTITY pgtrgm          SYSTEM "pgtrgm.sgml">
 <!ENTITY pgupgrade       SYSTEM "pgupgrade.sgml">
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ac1da22..1a90267 100644
--- a/doc/src/sgml/postgres.sgml
+++ b/doc/src/sgml/postgres.sgml
@@ -257,6 +257,7 @@
   &keywords;
   &features;
   &release;
+  &extensions;
   &contrib;
   &external-projects;
   &sourcerepo;
diff --git a/src/Makefile b/src/Makefile
index a046034..87d6e2c 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -24,6 +24,7 @@ SUBDIRS = \
 	bin \
 	pl \
 	makefiles \
+	extension \
 	test/regress
 
 # There are too many interdependencies between the subdirectories, so
diff --git a/src/extension/Makefile b/src/extension/Makefile
new file mode 100644
index 0000000..282f076
--- /dev/null
+++ b/src/extension/Makefile
@@ -0,0 +1,41 @@
+# $PostgreSQL: pgsql/src/extension/Makefile $
+
+subdir = src/extension
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+
+SUBDIRS = \
+		auto_explain    \
+		pageinspect \
+		pg_buffercache \
+		pgrowlocks  \
+		pg_stat_statements \
+		pgstattuple
+
+ifeq ($(with_openssl),yes)
+SUBDIRS += sslinfo
+endif
+
+ifeq ($(with_ossp_uuid),yes)
+SUBDIRS += uuid-ossp
+endif
+
+ifeq ($(with_libxml),yes)
+SUBDIRS += xml2
+endif
+
+# Missing:
+#		start-scripts	\ (does not have a makefile)
+
+
+all install installdirs uninstall distprep clean distclean maintainer-clean:
+	@for dir in $(SUBDIRS); do \
+		$(MAKE) -C $$dir $@ || exit; \
+	done
+
+# We'd like check operations to run all the subtests before failing.
+check installcheck:
+	@CHECKERR=0; for dir in $(SUBDIRS); do \
+		$(MAKE) -C $$dir $@ || CHECKERR=$$?; \
+	done; \
+	exit $$CHECKERR
diff --git a/src/extension/auto_explain/Makefile b/src/extension/auto_explain/Makefile
new file mode 100644
index 0000000..023e59a
--- /dev/null
+++ b/src/extension/auto_explain/Makefile
@@ -0,0 +1,16 @@
+# src/extension/auto_explain/Makefile
+
+MODULE_big = auto_explain
+OBJS = auto_explain.o
+MODULEDIR=extension
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/auto_explain
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/auto_explain/auto_explain.c b/src/extension/auto_explain/auto_explain.c
new file mode 100644
index 0000000..647f6d0
--- /dev/null
+++ b/src/extension/auto_explain/auto_explain.c
@@ -0,0 +1,304 @@
+/*-------------------------------------------------------------------------
+ *
+ * auto_explain.c
+ *
+ *
+ * Copyright (c) 2008-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/extension/auto_explain/auto_explain.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "commands/explain.h"
+#include "executor/instrument.h"
+#include "utils/guc.h"
+
+PG_MODULE_MAGIC;
+
+/* GUC variables */
+static int	auto_explain_log_min_duration = -1; /* msec or -1 */
+static bool auto_explain_log_analyze = false;
+static bool auto_explain_log_verbose = false;
+static bool auto_explain_log_buffers = false;
+static int	auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
+static bool auto_explain_log_nested_statements = false;
+
+static const struct config_enum_entry format_options[] = {
+	{"text", EXPLAIN_FORMAT_TEXT, false},
+	{"xml", EXPLAIN_FORMAT_XML, false},
+	{"json", EXPLAIN_FORMAT_JSON, false},
+	{"yaml", EXPLAIN_FORMAT_YAML, false},
+	{NULL, 0, false}
+};
+
+/* Current nesting depth of ExecutorRun calls */
+static int	nesting_level = 0;
+
+/* Saved hook values in case of unload */
+static ExecutorStart_hook_type prev_ExecutorStart = NULL;
+static ExecutorRun_hook_type prev_ExecutorRun = NULL;
+static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
+static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
+
+#define auto_explain_enabled() \
+	(auto_explain_log_min_duration >= 0 && \
+	 (nesting_level == 0 || auto_explain_log_nested_statements))
+
+void		_PG_init(void);
+void		_PG_fini(void);
+
+static void explain_ExecutorStart(QueryDesc *queryDesc, int eflags);
+static void explain_ExecutorRun(QueryDesc *queryDesc,
+					ScanDirection direction,
+					long count);
+static void explain_ExecutorFinish(QueryDesc *queryDesc);
+static void explain_ExecutorEnd(QueryDesc *queryDesc);
+
+
+/*
+ * Module load callback
+ */
+void
+_PG_init(void)
+{
+	/* Define custom GUC variables. */
+	DefineCustomIntVariable("auto_explain.log_min_duration",
+		 "Sets the minimum execution time above which plans will be logged.",
+						 "Zero prints all plans. -1 turns this feature off.",
+							&auto_explain_log_min_duration,
+							-1,
+							-1, INT_MAX / 1000,
+							PGC_SUSET,
+							GUC_UNIT_MS,
+							NULL,
+							NULL,
+							NULL);
+
+	DefineCustomBoolVariable("auto_explain.log_analyze",
+							 "Use EXPLAIN ANALYZE for plan logging.",
+							 NULL,
+							 &auto_explain_log_analyze,
+							 false,
+							 PGC_SUSET,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	DefineCustomBoolVariable("auto_explain.log_verbose",
+							 "Use EXPLAIN VERBOSE for plan logging.",
+							 NULL,
+							 &auto_explain_log_verbose,
+							 false,
+							 PGC_SUSET,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	DefineCustomBoolVariable("auto_explain.log_buffers",
+							 "Log buffers usage.",
+							 NULL,
+							 &auto_explain_log_buffers,
+							 false,
+							 PGC_SUSET,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	DefineCustomEnumVariable("auto_explain.log_format",
+							 "EXPLAIN format to be used for plan logging.",
+							 NULL,
+							 &auto_explain_log_format,
+							 EXPLAIN_FORMAT_TEXT,
+							 format_options,
+							 PGC_SUSET,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	DefineCustomBoolVariable("auto_explain.log_nested_statements",
+							 "Log nested statements.",
+							 NULL,
+							 &auto_explain_log_nested_statements,
+							 false,
+							 PGC_SUSET,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	EmitWarningsOnPlaceholders("auto_explain");
+
+	/* Install hooks. */
+	prev_ExecutorStart = ExecutorStart_hook;
+	ExecutorStart_hook = explain_ExecutorStart;
+	prev_ExecutorRun = ExecutorRun_hook;
+	ExecutorRun_hook = explain_ExecutorRun;
+	prev_ExecutorFinish = ExecutorFinish_hook;
+	ExecutorFinish_hook = explain_ExecutorFinish;
+	prev_ExecutorEnd = ExecutorEnd_hook;
+	ExecutorEnd_hook = explain_ExecutorEnd;
+}
+
+/*
+ * Module unload callback
+ */
+void
+_PG_fini(void)
+{
+	/* Uninstall hooks. */
+	ExecutorStart_hook = prev_ExecutorStart;
+	ExecutorRun_hook = prev_ExecutorRun;
+	ExecutorFinish_hook = prev_ExecutorFinish;
+	ExecutorEnd_hook = prev_ExecutorEnd;
+}
+
+/*
+ * ExecutorStart hook: start up logging if needed
+ */
+static void
+explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
+{
+	if (auto_explain_enabled())
+	{
+		/* Enable per-node instrumentation iff log_analyze is required. */
+		if (auto_explain_log_analyze && (eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0)
+		{
+			queryDesc->instrument_options |= INSTRUMENT_TIMER;
+			if (auto_explain_log_buffers)
+				queryDesc->instrument_options |= INSTRUMENT_BUFFERS;
+		}
+	}
+
+	if (prev_ExecutorStart)
+		prev_ExecutorStart(queryDesc, eflags);
+	else
+		standard_ExecutorStart(queryDesc, eflags);
+
+	if (auto_explain_enabled())
+	{
+		/*
+		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
+		 * space is allocated in the per-query context so it will go away at
+		 * ExecutorEnd.
+		 */
+		if (queryDesc->totaltime == NULL)
+		{
+			MemoryContext oldcxt;
+
+			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
+			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
+			MemoryContextSwitchTo(oldcxt);
+		}
+	}
+}
+
+/*
+ * ExecutorRun hook: all we need do is track nesting depth
+ */
+static void
+explain_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
+{
+	nesting_level++;
+	PG_TRY();
+	{
+		if (prev_ExecutorRun)
+			prev_ExecutorRun(queryDesc, direction, count);
+		else
+			standard_ExecutorRun(queryDesc, direction, count);
+		nesting_level--;
+	}
+	PG_CATCH();
+	{
+		nesting_level--;
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+}
+
+/*
+ * ExecutorFinish hook: all we need do is track nesting depth
+ */
+static void
+explain_ExecutorFinish(QueryDesc *queryDesc)
+{
+	nesting_level++;
+	PG_TRY();
+	{
+		if (prev_ExecutorFinish)
+			prev_ExecutorFinish(queryDesc);
+		else
+			standard_ExecutorFinish(queryDesc);
+		nesting_level--;
+	}
+	PG_CATCH();
+	{
+		nesting_level--;
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+}
+
+/*
+ * ExecutorEnd hook: log results if needed
+ */
+static void
+explain_ExecutorEnd(QueryDesc *queryDesc)
+{
+	if (queryDesc->totaltime && auto_explain_enabled())
+	{
+		double		msec;
+
+		/*
+		 * Make sure stats accumulation is done.  (Note: it's okay if several
+		 * levels of hook all do this.)
+		 */
+		InstrEndLoop(queryDesc->totaltime);
+
+		/* Log plan if duration is exceeded. */
+		msec = queryDesc->totaltime->total * 1000.0;
+		if (msec >= auto_explain_log_min_duration)
+		{
+			ExplainState es;
+
+			ExplainInitState(&es);
+			es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
+			es.verbose = auto_explain_log_verbose;
+			es.buffers = (es.analyze && auto_explain_log_buffers);
+			es.format = auto_explain_log_format;
+
+			ExplainBeginOutput(&es);
+			ExplainQueryText(&es, queryDesc);
+			ExplainPrintPlan(&es, queryDesc);
+			ExplainEndOutput(&es);
+
+			/* Remove last line break */
+			if (es.str->len > 0 && es.str->data[es.str->len - 1] == '\n')
+				es.str->data[--es.str->len] = '\0';
+
+			/*
+			 * Note: we rely on the existing logging of context or
+			 * debug_query_string to identify just which statement is being
+			 * reported.  This isn't ideal but trying to do it here would
+			 * often result in duplication.
+			 */
+			ereport(LOG,
+					(errmsg("duration: %.3f ms  plan:\n%s",
+							msec, es.str->data),
+					 errhidestmt(true)));
+
+			pfree(es.str->data);
+		}
+	}
+
+	if (prev_ExecutorEnd)
+		prev_ExecutorEnd(queryDesc);
+	else
+		standard_ExecutorEnd(queryDesc);
+}
diff --git a/src/extension/extension-global.mk b/src/extension/extension-global.mk
new file mode 100644
index 0000000..cc7643b
--- /dev/null
+++ b/src/extension/extension-global.mk
@@ -0,0 +1,5 @@
+# $PostgreSQL: pgsql/extension/extension-global.mk,v 1.10 2005/09/27 17:43:31 tgl Exp $
+
+NO_PGXS = 1
+MODULEDIR=extension
+include $(top_srcdir)/src/makefiles/pgxs.mk
diff --git a/src/extension/pageinspect/Makefile b/src/extension/pageinspect/Makefile
new file mode 100644
index 0000000..c6940b8
--- /dev/null
+++ b/src/extension/pageinspect/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pageinspect/Makefile
+
+MODULE_big	= pageinspect
+OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
+MODULEDIR = extension
+
+EXTENSION = pageinspect
+DATA = pageinspect--1.0.sql pageinspect--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pageinspect
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pageinspect/btreefuncs.c b/src/extension/pageinspect/btreefuncs.c
new file mode 100644
index 0000000..e378560
--- /dev/null
+++ b/src/extension/pageinspect/btreefuncs.c
@@ -0,0 +1,502 @@
+/*
+ * src/extension/pageinspect/btreefuncs.c
+ *
+ *
+ * btreefuncs.c
+ *
+ * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/nbtree.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "utils/builtins.h"
+
+
+extern Datum bt_metap(PG_FUNCTION_ARGS);
+extern Datum bt_page_items(PG_FUNCTION_ARGS);
+extern Datum bt_page_stats(PG_FUNCTION_ARGS);
+
+PG_FUNCTION_INFO_V1(bt_metap);
+PG_FUNCTION_INFO_V1(bt_page_items);
+PG_FUNCTION_INFO_V1(bt_page_stats);
+
+#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
+#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
+
+#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
+		if ( !(FirstOffsetNumber <= (offnum) && \
+						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
+			 elog(ERROR, "page offset number out of range"); }
+
+/* note: BlockNumber is unsigned, hence can't be negative */
+#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
+		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
+			 elog(ERROR, "block number out of range"); }
+
+/* ------------------------------------------------
+ * structure for single btree page statistics
+ * ------------------------------------------------
+ */
+typedef struct BTPageStat
+{
+	uint32		blkno;
+	uint32		live_items;
+	uint32		dead_items;
+	uint32		page_size;
+	uint32		max_avail;
+	uint32		free_size;
+	uint32		avg_item_size;
+	char		type;
+
+	/* opaque data */
+	BlockNumber btpo_prev;
+	BlockNumber btpo_next;
+	union
+	{
+		uint32		level;
+		TransactionId xact;
+	}			btpo;
+	uint16		btpo_flags;
+	BTCycleId	btpo_cycleid;
+} BTPageStat;
+
+
+/* -------------------------------------------------
+ * GetBTPageStatistics()
+ *
+ * Collect statistics of single b-tree page
+ * -------------------------------------------------
+ */
+static void
+GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
+{
+	Page		page = BufferGetPage(buffer);
+	PageHeader	phdr = (PageHeader) page;
+	OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
+	BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+	int			item_size = 0;
+	int			off;
+
+	stat->blkno = blkno;
+
+	stat->max_avail = BLCKSZ - (BLCKSZ - phdr->pd_special + SizeOfPageHeaderData);
+
+	stat->dead_items = stat->live_items = 0;
+
+	stat->page_size = PageGetPageSize(page);
+
+	/* page type (flags) */
+	if (P_ISDELETED(opaque))
+	{
+		stat->type = 'd';
+		stat->btpo.xact = opaque->btpo.xact;
+		return;
+	}
+	else if (P_IGNORE(opaque))
+		stat->type = 'e';
+	else if (P_ISLEAF(opaque))
+		stat->type = 'l';
+	else if (P_ISROOT(opaque))
+		stat->type = 'r';
+	else
+		stat->type = 'i';
+
+	/* btpage opaque data */
+	stat->btpo_prev = opaque->btpo_prev;
+	stat->btpo_next = opaque->btpo_next;
+	stat->btpo.level = opaque->btpo.level;
+	stat->btpo_flags = opaque->btpo_flags;
+	stat->btpo_cycleid = opaque->btpo_cycleid;
+
+	/* count live and dead tuples, and free space */
+	for (off = FirstOffsetNumber; off <= maxoff; off++)
+	{
+		IndexTuple	itup;
+
+		ItemId		id = PageGetItemId(page, off);
+
+		itup = (IndexTuple) PageGetItem(page, id);
+
+		item_size += IndexTupleSize(itup);
+
+		if (!ItemIdIsDead(id))
+			stat->live_items++;
+		else
+			stat->dead_items++;
+	}
+	stat->free_size = PageGetFreeSpace(page);
+
+	if ((stat->live_items + stat->dead_items) > 0)
+		stat->avg_item_size = item_size / (stat->live_items + stat->dead_items);
+	else
+		stat->avg_item_size = 0;
+}
+
+/* -----------------------------------------------
+ * bt_page()
+ *
+ * Usage: SELECT * FROM bt_page('t1_pkey', 1);
+ * -----------------------------------------------
+ */
+Datum
+bt_page_stats(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	uint32		blkno = PG_GETARG_UINT32(1);
+	Buffer		buffer;
+	Relation	rel;
+	RangeVar   *relrv;
+	Datum		result;
+	HeapTuple	tuple;
+	TupleDesc	tupleDesc;
+	int			j;
+	char	   *values[11];
+	BTPageStat	stat;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use pageinspect functions"))));
+
+	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+	rel = relation_openrv(relrv, AccessShareLock);
+
+	if (!IS_INDEX(rel) || !IS_BTREE(rel))
+		elog(ERROR, "relation \"%s\" is not a btree index",
+			 RelationGetRelationName(rel));
+
+	/*
+	 * Reject attempts to read non-local temporary relations; we would be
+	 * likely to get wrong data since we have no visibility into the owning
+	 * session's local buffers.
+	 */
+	if (RELATION_IS_OTHER_TEMP(rel))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot access temporary tables of other sessions")));
+
+	if (blkno == 0)
+		elog(ERROR, "block 0 is a meta page");
+
+	CHECK_RELATION_BLOCK_RANGE(rel, blkno);
+
+	buffer = ReadBuffer(rel, blkno);
+
+	/* keep compiler quiet */
+	stat.btpo_prev = stat.btpo_next = InvalidBlockNumber;
+	stat.btpo_flags = stat.free_size = stat.avg_item_size = 0;
+
+	GetBTPageStatistics(blkno, buffer, &stat);
+
+	/* Build a tuple descriptor for our result type */
+	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	j = 0;
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.blkno);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%c", stat.type);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.live_items);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.dead_items);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.avg_item_size);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.page_size);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.free_size);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.btpo_prev);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.btpo_next);
+	values[j] = palloc(32);
+	if (stat.type == 'd')
+		snprintf(values[j++], 32, "%d", stat.btpo.xact);
+	else
+		snprintf(values[j++], 32, "%d", stat.btpo.level);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", stat.btpo_flags);
+
+	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+								   values);
+
+	result = HeapTupleGetDatum(tuple);
+
+	ReleaseBuffer(buffer);
+
+	relation_close(rel, AccessShareLock);
+
+	PG_RETURN_DATUM(result);
+}
+
+/*-------------------------------------------------------
+ * bt_page_items()
+ *
+ * Get IndexTupleData set in a btree page
+ *
+ * Usage: SELECT * FROM bt_page_items('t1_pkey', 1);
+ *-------------------------------------------------------
+ */
+
+/*
+ * cross-call data structure for SRF
+ */
+struct user_args
+{
+	Page		page;
+	OffsetNumber offset;
+};
+
+Datum
+bt_page_items(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	uint32		blkno = PG_GETARG_UINT32(1);
+	Datum		result;
+	char	   *values[6];
+	HeapTuple	tuple;
+	FuncCallContext *fctx;
+	MemoryContext mctx;
+	struct user_args *uargs;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use pageinspect functions"))));
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		RangeVar   *relrv;
+		Relation	rel;
+		Buffer		buffer;
+		BTPageOpaque opaque;
+		TupleDesc	tupleDesc;
+
+		fctx = SRF_FIRSTCALL_INIT();
+
+		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+		rel = relation_openrv(relrv, AccessShareLock);
+
+		if (!IS_INDEX(rel) || !IS_BTREE(rel))
+			elog(ERROR, "relation \"%s\" is not a btree index",
+				 RelationGetRelationName(rel));
+
+		/*
+		 * Reject attempts to read non-local temporary relations; we would be
+		 * likely to get wrong data since we have no visibility into the
+		 * owning session's local buffers.
+		 */
+		if (RELATION_IS_OTHER_TEMP(rel))
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				errmsg("cannot access temporary tables of other sessions")));
+
+		if (blkno == 0)
+			elog(ERROR, "block 0 is a meta page");
+
+		CHECK_RELATION_BLOCK_RANGE(rel, blkno);
+
+		buffer = ReadBuffer(rel, blkno);
+
+		/*
+		 * We copy the page into local storage to avoid holding pin on the
+		 * buffer longer than we must, and possibly failing to release it at
+		 * all if the calling query doesn't fetch all rows.
+		 */
+		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
+
+		uargs = palloc(sizeof(struct user_args));
+
+		uargs->page = palloc(BLCKSZ);
+		memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
+
+		ReleaseBuffer(buffer);
+		relation_close(rel, AccessShareLock);
+
+		uargs->offset = FirstOffsetNumber;
+
+		opaque = (BTPageOpaque) PageGetSpecialPointer(uargs->page);
+
+		if (P_ISDELETED(opaque))
+			elog(NOTICE, "page is deleted");
+
+		fctx->max_calls = PageGetMaxOffsetNumber(uargs->page);
+
+		/* Build a tuple descriptor for our result type */
+		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+			elog(ERROR, "return type must be a row type");
+
+		fctx->attinmeta = TupleDescGetAttInMetadata(tupleDesc);
+
+		fctx->user_fctx = uargs;
+
+		MemoryContextSwitchTo(mctx);
+	}
+
+	fctx = SRF_PERCALL_SETUP();
+	uargs = fctx->user_fctx;
+
+	if (fctx->call_cntr < fctx->max_calls)
+	{
+		ItemId		id;
+		IndexTuple	itup;
+		int			j;
+		int			off;
+		int			dlen;
+		char	   *dump;
+		char	   *ptr;
+
+		id = PageGetItemId(uargs->page, uargs->offset);
+
+		if (!ItemIdIsValid(id))
+			elog(ERROR, "invalid ItemId");
+
+		itup = (IndexTuple) PageGetItem(uargs->page, id);
+
+		j = 0;
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%d", uargs->offset);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "(%u,%u)",
+				 BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
+				 itup->t_tid.ip_posid);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%d", (int) IndexTupleSize(itup));
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%c", IndexTupleHasNulls(itup) ? 't' : 'f');
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
+
+		ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
+		dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
+		dump = palloc0(dlen * 3 + 1);
+		values[j] = dump;
+		for (off = 0; off < dlen; off++)
+		{
+			if (off > 0)
+				*dump++ = ' ';
+			sprintf(dump, "%02x", *(ptr + off) & 0xff);
+			dump += 2;
+		}
+
+		tuple = BuildTupleFromCStrings(fctx->attinmeta, values);
+		result = HeapTupleGetDatum(tuple);
+
+		uargs->offset = uargs->offset + 1;
+
+		SRF_RETURN_NEXT(fctx, result);
+	}
+	else
+	{
+		pfree(uargs->page);
+		pfree(uargs);
+		SRF_RETURN_DONE(fctx);
+	}
+}
+
+
+/* ------------------------------------------------
+ * bt_metap()
+ *
+ * Get a btree's meta-page information
+ *
+ * Usage: SELECT * FROM bt_metap('t1_pkey')
+ * ------------------------------------------------
+ */
+Datum
+bt_metap(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	Datum		result;
+	Relation	rel;
+	RangeVar   *relrv;
+	BTMetaPageData *metad;
+	TupleDesc	tupleDesc;
+	int			j;
+	char	   *values[6];
+	Buffer		buffer;
+	Page		page;
+	HeapTuple	tuple;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use pageinspect functions"))));
+
+	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+	rel = relation_openrv(relrv, AccessShareLock);
+
+	if (!IS_INDEX(rel) || !IS_BTREE(rel))
+		elog(ERROR, "relation \"%s\" is not a btree index",
+			 RelationGetRelationName(rel));
+
+	/*
+	 * Reject attempts to read non-local temporary relations; we would be
+	 * likely to get wrong data since we have no visibility into the owning
+	 * session's local buffers.
+	 */
+	if (RELATION_IS_OTHER_TEMP(rel))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot access temporary tables of other sessions")));
+
+	buffer = ReadBuffer(rel, 0);
+	page = BufferGetPage(buffer);
+	metad = BTPageGetMeta(page);
+
+	/* Build a tuple descriptor for our result type */
+	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	j = 0;
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", metad->btm_magic);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", metad->btm_version);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", metad->btm_root);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", metad->btm_level);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", metad->btm_fastroot);
+	values[j] = palloc(32);
+	snprintf(values[j++], 32, "%d", metad->btm_fastlevel);
+
+	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+								   values);
+
+	result = HeapTupleGetDatum(tuple);
+
+	ReleaseBuffer(buffer);
+
+	relation_close(rel, AccessShareLock);
+
+	PG_RETURN_DATUM(result);
+}
diff --git a/src/extension/pageinspect/fsmfuncs.c b/src/extension/pageinspect/fsmfuncs.c
new file mode 100644
index 0000000..45b2b9d
--- /dev/null
+++ b/src/extension/pageinspect/fsmfuncs.c
@@ -0,0 +1,59 @@
+/*-------------------------------------------------------------------------
+ *
+ * fsmfuncs.c
+ *	  Functions to investigate FSM pages
+ *
+ * These functions are restricted to superusers for the fear of introducing
+ * security holes if the input checking isn't as water-tight as it should.
+ * You'd need to be superuser to obtain a raw page image anyway, so
+ * there's hardly any use case for using these without superuser-rights
+ * anyway.
+ *
+ * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/extension/pageinspect/fsmfuncs.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+#include "lib/stringinfo.h"
+#include "storage/fsm_internals.h"
+#include "utils/builtins.h"
+#include "miscadmin.h"
+#include "funcapi.h"
+
+Datum		fsm_page_contents(PG_FUNCTION_ARGS);
+
+/*
+ * Dumps the contents of a FSM page.
+ */
+PG_FUNCTION_INFO_V1(fsm_page_contents);
+
+Datum
+fsm_page_contents(PG_FUNCTION_ARGS)
+{
+	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
+	StringInfoData sinfo;
+	FSMPage		fsmpage;
+	int			i;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use raw page functions"))));
+
+	fsmpage = (FSMPage) PageGetContents(VARDATA(raw_page));
+
+	initStringInfo(&sinfo);
+
+	for (i = 0; i < NodesPerPage; i++)
+	{
+		if (fsmpage->fp_nodes[i] != 0)
+			appendStringInfo(&sinfo, "%d: %d\n", i, fsmpage->fp_nodes[i]);
+	}
+	appendStringInfo(&sinfo, "fp_next_slot: %d\n", fsmpage->fp_next_slot);
+
+	PG_RETURN_TEXT_P(cstring_to_text(sinfo.data));
+}
diff --git a/src/extension/pageinspect/heapfuncs.c b/src/extension/pageinspect/heapfuncs.c
new file mode 100644
index 0000000..a9f95b4
--- /dev/null
+++ b/src/extension/pageinspect/heapfuncs.c
@@ -0,0 +1,230 @@
+/*-------------------------------------------------------------------------
+ *
+ * heapfuncs.c
+ *	  Functions to investigate heap pages
+ *
+ * We check the input to these functions for corrupt pointers etc. that
+ * might cause crashes, but at the same time we try to print out as much
+ * information as possible, even if it's nonsense. That's because if a
+ * page is corrupt, we don't know why and how exactly it is corrupt, so we
+ * let the user judge it.
+ *
+ * These functions are restricted to superusers for the fear of introducing
+ * security holes if the input checking isn't as water-tight as it should be.
+ * You'd need to be superuser to obtain a raw page image anyway, so
+ * there's hardly any use case for using these without superuser-rights
+ * anyway.
+ *
+ * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/extension/pageinspect/heapfuncs.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "funcapi.h"
+#include "access/heapam.h"
+#include "access/transam.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_type.h"
+#include "utils/builtins.h"
+#include "miscadmin.h"
+
+Datum		heap_page_items(PG_FUNCTION_ARGS);
+
+
+/*
+ * bits_to_text
+ *
+ * Converts a bits8-array of 'len' bits to a human-readable
+ * c-string representation.
+ */
+static char *
+bits_to_text(bits8 *bits, int len)
+{
+	int			i;
+	char	   *str;
+
+	str = palloc(len + 1);
+
+	for (i = 0; i < len; i++)
+		str[i] = (bits[(i / 8)] & (1 << (i % 8))) ? '1' : '0';
+
+	str[i] = '\0';
+
+	return str;
+}
+
+
+/*
+ * heap_page_items
+ *
+ * Allows inspection of line pointers and tuple headers of a heap page.
+ */
+PG_FUNCTION_INFO_V1(heap_page_items);
+
+typedef struct heap_page_items_state
+{
+	TupleDesc	tupd;
+	Page		page;
+	uint16		offset;
+} heap_page_items_state;
+
+Datum
+heap_page_items(PG_FUNCTION_ARGS)
+{
+	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
+	heap_page_items_state *inter_call_data = NULL;
+	FuncCallContext *fctx;
+	int			raw_page_size;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use raw page functions"))));
+
+	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		TupleDesc	tupdesc;
+		MemoryContext mctx;
+
+		if (raw_page_size < SizeOfPageHeaderData)
+			ereport(ERROR,
+					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				  errmsg("input page too small (%d bytes)", raw_page_size)));
+
+		fctx = SRF_FIRSTCALL_INIT();
+		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
+
+		inter_call_data = palloc(sizeof(heap_page_items_state));
+
+		/* Build a tuple descriptor for our result type */
+		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+			elog(ERROR, "return type must be a row type");
+
+		inter_call_data->tupd = tupdesc;
+
+		inter_call_data->offset = FirstOffsetNumber;
+		inter_call_data->page = VARDATA(raw_page);
+
+		fctx->max_calls = PageGetMaxOffsetNumber(inter_call_data->page);
+		fctx->user_fctx = inter_call_data;
+
+		MemoryContextSwitchTo(mctx);
+	}
+
+	fctx = SRF_PERCALL_SETUP();
+	inter_call_data = fctx->user_fctx;
+
+	if (fctx->call_cntr < fctx->max_calls)
+	{
+		Page		page = inter_call_data->page;
+		HeapTuple	resultTuple;
+		Datum		result;
+		ItemId		id;
+		Datum		values[13];
+		bool		nulls[13];
+		uint16		lp_offset;
+		uint16		lp_flags;
+		uint16		lp_len;
+
+		memset(nulls, 0, sizeof(nulls));
+
+		/* Extract information from the line pointer */
+
+		id = PageGetItemId(page, inter_call_data->offset);
+
+		lp_offset = ItemIdGetOffset(id);
+		lp_flags = ItemIdGetFlags(id);
+		lp_len = ItemIdGetLength(id);
+
+		values[0] = UInt16GetDatum(inter_call_data->offset);
+		values[1] = UInt16GetDatum(lp_offset);
+		values[2] = UInt16GetDatum(lp_flags);
+		values[3] = UInt16GetDatum(lp_len);
+
+		/*
+		 * We do just enough validity checking to make sure we don't reference
+		 * data outside the page passed to us. The page could be corrupt in
+		 * many other ways, but at least we won't crash.
+		 */
+		if (ItemIdHasStorage(id) &&
+			lp_len >= sizeof(HeapTupleHeader) &&
+			lp_offset == MAXALIGN(lp_offset) &&
+			lp_offset + lp_len <= raw_page_size)
+		{
+			HeapTupleHeader tuphdr;
+			int			bits_len;
+
+			/* Extract information from the tuple header */
+
+			tuphdr = (HeapTupleHeader) PageGetItem(page, id);
+
+			values[4] = UInt32GetDatum(HeapTupleHeaderGetXmin(tuphdr));
+			values[5] = UInt32GetDatum(HeapTupleHeaderGetXmax(tuphdr));
+			values[6] = UInt32GetDatum(HeapTupleHeaderGetRawCommandId(tuphdr)); /* shared with xvac */
+			values[7] = PointerGetDatum(&tuphdr->t_ctid);
+			values[8] = UInt32GetDatum(tuphdr->t_infomask2);
+			values[9] = UInt32GetDatum(tuphdr->t_infomask);
+			values[10] = UInt8GetDatum(tuphdr->t_hoff);
+
+			/*
+			 * We already checked that the item as is completely within the
+			 * raw page passed to us, with the length given in the line
+			 * pointer.. Let's check that t_hoff doesn't point over lp_len,
+			 * before using it to access t_bits and oid.
+			 */
+			if (tuphdr->t_hoff >= sizeof(HeapTupleHeader) &&
+				tuphdr->t_hoff <= lp_len)
+			{
+				if (tuphdr->t_infomask & HEAP_HASNULL)
+				{
+					bits_len = tuphdr->t_hoff -
+						(((char *) tuphdr->t_bits) -((char *) tuphdr));
+
+					values[11] = CStringGetTextDatum(
+								 bits_to_text(tuphdr->t_bits, bits_len * 8));
+				}
+				else
+					nulls[11] = true;
+
+				if (tuphdr->t_infomask & HEAP_HASOID)
+					values[12] = HeapTupleHeaderGetOid(tuphdr);
+				else
+					nulls[12] = true;
+			}
+			else
+			{
+				nulls[11] = true;
+				nulls[12] = true;
+			}
+		}
+		else
+		{
+			/*
+			 * The line pointer is not used, or it's invalid. Set the rest of
+			 * the fields to NULL
+			 */
+			int			i;
+
+			for (i = 4; i <= 12; i++)
+				nulls[i] = true;
+		}
+
+		/* Build and return the result tuple. */
+		resultTuple = heap_form_tuple(inter_call_data->tupd, values, nulls);
+		result = HeapTupleGetDatum(resultTuple);
+
+		inter_call_data->offset++;
+
+		SRF_RETURN_NEXT(fctx, result);
+	}
+	else
+		SRF_RETURN_DONE(fctx);
+}
diff --git a/src/extension/pageinspect/pageinspect--1.0.sql b/src/extension/pageinspect/pageinspect--1.0.sql
new file mode 100644
index 0000000..cadcb4a
--- /dev/null
+++ b/src/extension/pageinspect/pageinspect--1.0.sql
@@ -0,0 +1,104 @@
+/* src/extension/pageinspect/pageinspect--1.0.sql */
+
+--
+-- get_raw_page()
+--
+CREATE FUNCTION get_raw_page(text, int4)
+RETURNS bytea
+AS 'MODULE_PATHNAME', 'get_raw_page'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION get_raw_page(text, text, int4)
+RETURNS bytea
+AS 'MODULE_PATHNAME', 'get_raw_page_fork'
+LANGUAGE C STRICT;
+
+--
+-- page_header()
+--
+CREATE FUNCTION page_header(IN page bytea,
+    OUT lsn text,
+    OUT tli smallint,
+    OUT flags smallint,
+    OUT lower smallint,
+    OUT upper smallint,
+    OUT special smallint,
+    OUT pagesize smallint,
+    OUT version smallint,
+    OUT prune_xid xid)
+AS 'MODULE_PATHNAME', 'page_header'
+LANGUAGE C STRICT;
+
+--
+-- heap_page_items()
+--
+CREATE FUNCTION heap_page_items(IN page bytea,
+    OUT lp smallint,
+    OUT lp_off smallint,
+    OUT lp_flags smallint,
+    OUT lp_len smallint,
+    OUT t_xmin xid,
+    OUT t_xmax xid,
+    OUT t_field3 int4,
+    OUT t_ctid tid,
+    OUT t_infomask2 integer,
+    OUT t_infomask integer,
+    OUT t_hoff smallint,
+    OUT t_bits text,
+    OUT t_oid oid)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'heap_page_items'
+LANGUAGE C STRICT;
+
+--
+-- bt_metap()
+--
+CREATE FUNCTION bt_metap(IN relname text,
+    OUT magic int4,
+    OUT version int4,
+    OUT root int4,
+    OUT level int4,
+    OUT fastroot int4,
+    OUT fastlevel int4)
+AS 'MODULE_PATHNAME', 'bt_metap'
+LANGUAGE C STRICT;
+
+--
+-- bt_page_stats()
+--
+CREATE FUNCTION bt_page_stats(IN relname text, IN blkno int4,
+    OUT blkno int4,
+    OUT type "char",
+    OUT live_items int4,
+    OUT dead_items int4,
+    OUT avg_item_size int4,
+    OUT page_size int4,
+    OUT free_size int4,
+    OUT btpo_prev int4,
+    OUT btpo_next int4,
+    OUT btpo int4,
+    OUT btpo_flags int4)
+AS 'MODULE_PATHNAME', 'bt_page_stats'
+LANGUAGE C STRICT;
+
+--
+-- bt_page_items()
+--
+CREATE FUNCTION bt_page_items(IN relname text, IN blkno int4,
+    OUT itemoffset smallint,
+    OUT ctid tid,
+    OUT itemlen smallint,
+    OUT nulls bool,
+    OUT vars bool,
+    OUT data text)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'bt_page_items'
+LANGUAGE C STRICT;
+
+--
+-- fsm_page_contents()
+--
+CREATE FUNCTION fsm_page_contents(IN page bytea)
+RETURNS text
+AS 'MODULE_PATHNAME', 'fsm_page_contents'
+LANGUAGE C STRICT;
diff --git a/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql b/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql
new file mode 100644
index 0000000..9e9d8cf
--- /dev/null
+++ b/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql
@@ -0,0 +1,28 @@
+/* src/extension/pageinspect/pageinspect--unpackaged--1.0.sql */
+
+DROP FUNCTION heap_page_items(bytea);
+CREATE FUNCTION heap_page_items(IN page bytea,
+	OUT lp smallint,
+	OUT lp_off smallint,
+	OUT lp_flags smallint,
+	OUT lp_len smallint,
+	OUT t_xmin xid,
+	OUT t_xmax xid,
+	OUT t_field3 int4,
+	OUT t_ctid tid,
+	OUT t_infomask2 integer,
+	OUT t_infomask integer,
+	OUT t_hoff smallint,
+	OUT t_bits text,
+	OUT t_oid oid)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'heap_page_items'
+LANGUAGE C STRICT;
+
+ALTER EXTENSION pageinspect ADD function get_raw_page(text,integer);
+ALTER EXTENSION pageinspect ADD function get_raw_page(text,text,integer);
+ALTER EXTENSION pageinspect ADD function page_header(bytea);
+ALTER EXTENSION pageinspect ADD function bt_metap(text);
+ALTER EXTENSION pageinspect ADD function bt_page_stats(text,integer);
+ALTER EXTENSION pageinspect ADD function bt_page_items(text,integer);
+ALTER EXTENSION pageinspect ADD function fsm_page_contents(bytea);
diff --git a/src/extension/pageinspect/pageinspect.control b/src/extension/pageinspect/pageinspect.control
new file mode 100644
index 0000000..f9da0e8
--- /dev/null
+++ b/src/extension/pageinspect/pageinspect.control
@@ -0,0 +1,5 @@
+# pageinspect extension
+comment = 'inspect the contents of database pages at a low level'
+default_version = '1.0'
+module_pathname = '$libdir/pageinspect'
+relocatable = true
diff --git a/src/extension/pageinspect/rawpage.c b/src/extension/pageinspect/rawpage.c
new file mode 100644
index 0000000..87a029f
--- /dev/null
+++ b/src/extension/pageinspect/rawpage.c
@@ -0,0 +1,232 @@
+/*-------------------------------------------------------------------------
+ *
+ * rawpage.c
+ *	  Functions to extract a raw page as bytea and inspect it
+ *
+ * Access-method specific inspection functions are in separate files.
+ *
+ * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/extension/pageinspect/rawpage.c
+ *
+ *-------------------------------------------------------------------------
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/transam.h"
+#include "catalog/catalog.h"
+#include "catalog/namespace.h"
+#include "catalog/pg_type.h"
+#include "fmgr.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "utils/builtins.h"
+
+PG_MODULE_MAGIC;
+
+Datum		get_raw_page(PG_FUNCTION_ARGS);
+Datum		get_raw_page_fork(PG_FUNCTION_ARGS);
+Datum		page_header(PG_FUNCTION_ARGS);
+
+static bytea *get_raw_page_internal(text *relname, ForkNumber forknum,
+					  BlockNumber blkno);
+
+
+/*
+ * get_raw_page
+ *
+ * Returns a copy of a page from shared buffers as a bytea
+ */
+PG_FUNCTION_INFO_V1(get_raw_page);
+
+Datum
+get_raw_page(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	uint32		blkno = PG_GETARG_UINT32(1);
+	bytea	   *raw_page;
+
+	/*
+	 * We don't normally bother to check the number of arguments to a C
+	 * function, but here it's needed for safety because early 8.4 beta
+	 * releases mistakenly redefined get_raw_page() as taking three arguments.
+	 */
+	if (PG_NARGS() != 2)
+		ereport(ERROR,
+				(errmsg("wrong number of arguments to get_raw_page()"),
+				 errhint("Run the updated pageinspect.sql script.")));
+
+	raw_page = get_raw_page_internal(relname, MAIN_FORKNUM, blkno);
+
+	PG_RETURN_BYTEA_P(raw_page);
+}
+
+/*
+ * get_raw_page_fork
+ *
+ * Same, for any fork
+ */
+PG_FUNCTION_INFO_V1(get_raw_page_fork);
+
+Datum
+get_raw_page_fork(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	text	   *forkname = PG_GETARG_TEXT_P(1);
+	uint32		blkno = PG_GETARG_UINT32(2);
+	bytea	   *raw_page;
+	ForkNumber	forknum;
+
+	forknum = forkname_to_number(text_to_cstring(forkname));
+
+	raw_page = get_raw_page_internal(relname, forknum, blkno);
+
+	PG_RETURN_BYTEA_P(raw_page);
+}
+
+/*
+ * workhorse
+ */
+static bytea *
+get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
+{
+	bytea	   *raw_page;
+	RangeVar   *relrv;
+	Relation	rel;
+	char	   *raw_page_data;
+	Buffer		buf;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use raw functions"))));
+
+	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+	rel = relation_openrv(relrv, AccessShareLock);
+
+	/* Check that this relation has storage */
+	if (rel->rd_rel->relkind == RELKIND_VIEW)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("cannot get raw page from view \"%s\"",
+						RelationGetRelationName(rel))));
+	if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("cannot get raw page from composite type \"%s\"",
+						RelationGetRelationName(rel))));
+	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("cannot get raw page from foreign table \"%s\"",
+						RelationGetRelationName(rel))));
+
+	/*
+	 * Reject attempts to read non-local temporary relations; we would be
+	 * likely to get wrong data since we have no visibility into the owning
+	 * session's local buffers.
+	 */
+	if (RELATION_IS_OTHER_TEMP(rel))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot access temporary tables of other sessions")));
+
+	if (blkno >= RelationGetNumberOfBlocks(rel))
+		elog(ERROR, "block number %u is out of range for relation \"%s\"",
+			 blkno, RelationGetRelationName(rel));
+
+	/* Initialize buffer to copy to */
+	raw_page = (bytea *) palloc(BLCKSZ + VARHDRSZ);
+	SET_VARSIZE(raw_page, BLCKSZ + VARHDRSZ);
+	raw_page_data = VARDATA(raw_page);
+
+	/* Take a verbatim copy of the page */
+
+	buf = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
+	LockBuffer(buf, BUFFER_LOCK_SHARE);
+
+	memcpy(raw_page_data, BufferGetPage(buf), BLCKSZ);
+
+	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
+	ReleaseBuffer(buf);
+
+	relation_close(rel, AccessShareLock);
+
+	return raw_page;
+}
+
+/*
+ * page_header
+ *
+ * Allows inspection of page header fields of a raw page
+ */
+
+PG_FUNCTION_INFO_V1(page_header);
+
+Datum
+page_header(PG_FUNCTION_ARGS)
+{
+	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
+	int			raw_page_size;
+
+	TupleDesc	tupdesc;
+
+	Datum		result;
+	HeapTuple	tuple;
+	Datum		values[9];
+	bool		nulls[9];
+
+	PageHeader	page;
+	XLogRecPtr	lsn;
+	char		lsnchar[64];
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use raw page functions"))));
+
+	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
+
+	/*
+	 * Check that enough data was supplied, so that we don't try to access
+	 * fields outside the supplied buffer.
+	 */
+	if (raw_page_size < sizeof(PageHeaderData))
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("input page too small (%d bytes)", raw_page_size)));
+
+	page = (PageHeader) VARDATA(raw_page);
+
+	/* Build a tuple descriptor for our result type */
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/* Extract information from the page header */
+
+	lsn = PageGetLSN(page);
+	snprintf(lsnchar, sizeof(lsnchar), "%X/%X", lsn.xlogid, lsn.xrecoff);
+
+	values[0] = CStringGetTextDatum(lsnchar);
+	values[1] = UInt16GetDatum(PageGetTLI(page));
+	values[2] = UInt16GetDatum(page->pd_flags);
+	values[3] = UInt16GetDatum(page->pd_lower);
+	values[4] = UInt16GetDatum(page->pd_upper);
+	values[5] = UInt16GetDatum(page->pd_special);
+	values[6] = UInt16GetDatum(PageGetPageSize(page));
+	values[7] = UInt16GetDatum(PageGetPageLayoutVersion(page));
+	values[8] = TransactionIdGetDatum(page->pd_prune_xid);
+
+	/* Build and return the tuple. */
+
+	memset(nulls, 0, sizeof(nulls));
+
+	tuple = heap_form_tuple(tupdesc, values, nulls);
+	result = HeapTupleGetDatum(tuple);
+
+	PG_RETURN_DATUM(result);
+}
diff --git a/src/extension/pg_buffercache/Makefile b/src/extension/pg_buffercache/Makefile
new file mode 100644
index 0000000..e361592
--- /dev/null
+++ b/src/extension/pg_buffercache/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pg_buffercache/Makefile
+
+MODULE_big = pg_buffercache
+OBJS = pg_buffercache_pages.o
+MODULEDIR = extension
+
+EXTENSION = pg_buffercache
+DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pg_buffercache
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pg_buffercache/pg_buffercache--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
new file mode 100644
index 0000000..ceca6ae
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
@@ -0,0 +1,17 @@
+/* src/extension/pg_buffercache/pg_buffercache--1.0.sql */
+
+-- Register the function.
+CREATE FUNCTION pg_buffercache_pages()
+RETURNS SETOF RECORD
+AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
+LANGUAGE C;
+
+-- Create a view for convenient access.
+CREATE VIEW pg_buffercache AS
+	SELECT P.* FROM pg_buffercache_pages() AS P
+	(bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
+	 relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
+
+-- Don't want these to be available to public.
+REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
+REVOKE ALL ON pg_buffercache FROM PUBLIC;
diff --git a/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
new file mode 100644
index 0000000..0cfa317
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
@@ -0,0 +1,4 @@
+/* src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
+
+ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
+ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
diff --git a/src/extension/pg_buffercache/pg_buffercache.control b/src/extension/pg_buffercache/pg_buffercache.control
new file mode 100644
index 0000000..709513c
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache.control
@@ -0,0 +1,5 @@
+# pg_buffercache extension
+comment = 'examine the shared buffer cache'
+default_version = '1.0'
+module_pathname = '$libdir/pg_buffercache'
+relocatable = true
diff --git a/src/extension/pg_buffercache/pg_buffercache_pages.c b/src/extension/pg_buffercache/pg_buffercache_pages.c
new file mode 100644
index 0000000..a44610f
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache_pages.c
@@ -0,0 +1,219 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_buffercache_pages.c
+ *	  display some contents of the buffer cache
+ *
+ *	  src/extension/pg_buffercache/pg_buffercache_pages.c
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
+#include "storage/buf_internals.h"
+#include "storage/bufmgr.h"
+#include "utils/relcache.h"
+
+
+#define NUM_BUFFERCACHE_PAGES_ELEM	8
+
+PG_MODULE_MAGIC;
+
+Datum		pg_buffercache_pages(PG_FUNCTION_ARGS);
+
+
+/*
+ * Record structure holding the to be exposed cache data.
+ */
+typedef struct
+{
+	uint32		bufferid;
+	Oid			relfilenode;
+	Oid			reltablespace;
+	Oid			reldatabase;
+	ForkNumber	forknum;
+	BlockNumber blocknum;
+	bool		isvalid;
+	bool		isdirty;
+	uint16		usagecount;
+} BufferCachePagesRec;
+
+
+/*
+ * Function context for data persisting over repeated calls.
+ */
+typedef struct
+{
+	TupleDesc	tupdesc;
+	BufferCachePagesRec *record;
+} BufferCachePagesContext;
+
+
+/*
+ * Function returning data from the shared buffer cache - buffer number,
+ * relation node/tablespace/database/blocknum and dirty indicator.
+ */
+PG_FUNCTION_INFO_V1(pg_buffercache_pages);
+
+Datum
+pg_buffercache_pages(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	Datum		result;
+	MemoryContext oldcontext;
+	BufferCachePagesContext *fctx;		/* User function context. */
+	TupleDesc	tupledesc;
+	HeapTuple	tuple;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		int			i;
+		volatile BufferDesc *bufHdr;
+
+		funcctx = SRF_FIRSTCALL_INIT();
+
+		/* Switch context when allocating stuff to be used in later calls */
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		/* Create a user function context for cross-call persistence */
+		fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
+
+		/* Construct a tuple descriptor for the result rows. */
+		tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
+						   INT4OID, -1, 0);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
+						   OIDOID, -1, 0);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
+						   OIDOID, -1, 0);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
+						   OIDOID, -1, 0);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
+						   INT2OID, -1, 0);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
+						   INT8OID, -1, 0);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
+						   BOOLOID, -1, 0);
+		TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
+						   INT2OID, -1, 0);
+
+		fctx->tupdesc = BlessTupleDesc(tupledesc);
+
+		/* Allocate NBuffers worth of BufferCachePagesRec records. */
+		fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
+
+		/* Set max calls and remember the user function context. */
+		funcctx->max_calls = NBuffers;
+		funcctx->user_fctx = fctx;
+
+		/* Return to original context when allocating transient memory */
+		MemoryContextSwitchTo(oldcontext);
+
+		/*
+		 * To get a consistent picture of the buffer state, we must lock all
+		 * partitions of the buffer map.  Needless to say, this is horrible
+		 * for concurrency.  Must grab locks in increasing order to avoid
+		 * possible deadlocks.
+		 */
+		for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
+			LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
+
+		/*
+		 * Scan though all the buffers, saving the relevant fields in the
+		 * fctx->record structure.
+		 */
+		for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
+		{
+			/* Lock each buffer header before inspecting. */
+			LockBufHdr(bufHdr);
+
+			fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
+			fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
+			fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
+			fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
+			fctx->record[i].forknum = bufHdr->tag.forkNum;
+			fctx->record[i].blocknum = bufHdr->tag.blockNum;
+			fctx->record[i].usagecount = bufHdr->usage_count;
+
+			if (bufHdr->flags & BM_DIRTY)
+				fctx->record[i].isdirty = true;
+			else
+				fctx->record[i].isdirty = false;
+
+			/* Note if the buffer is valid, and has storage created */
+			if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
+				fctx->record[i].isvalid = true;
+			else
+				fctx->record[i].isvalid = false;
+
+			UnlockBufHdr(bufHdr);
+		}
+
+		/*
+		 * And release locks.  We do this in reverse order for two reasons:
+		 * (1) Anyone else who needs more than one of the locks will be trying
+		 * to lock them in increasing order; we don't want to release the
+		 * other process until it can get all the locks it needs. (2) This
+		 * avoids O(N^2) behavior inside LWLockRelease.
+		 */
+		for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
+			LWLockRelease(FirstBufMappingLock + i);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+
+	/* Get the saved state */
+	fctx = funcctx->user_fctx;
+
+	if (funcctx->call_cntr < funcctx->max_calls)
+	{
+		uint32		i = funcctx->call_cntr;
+		Datum		values[NUM_BUFFERCACHE_PAGES_ELEM];
+		bool		nulls[NUM_BUFFERCACHE_PAGES_ELEM];
+
+		values[0] = Int32GetDatum(fctx->record[i].bufferid);
+		nulls[0] = false;
+
+		/*
+		 * Set all fields except the bufferid to null if the buffer is unused
+		 * or not valid.
+		 */
+		if (fctx->record[i].blocknum == InvalidBlockNumber ||
+			fctx->record[i].isvalid == false)
+		{
+			nulls[1] = true;
+			nulls[2] = true;
+			nulls[3] = true;
+			nulls[4] = true;
+			nulls[5] = true;
+			nulls[6] = true;
+			nulls[7] = true;
+		}
+		else
+		{
+			values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
+			nulls[1] = false;
+			values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
+			nulls[2] = false;
+			values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
+			nulls[3] = false;
+			values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
+			nulls[4] = false;
+			values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
+			nulls[5] = false;
+			values[6] = BoolGetDatum(fctx->record[i].isdirty);
+			nulls[6] = false;
+			values[7] = Int16GetDatum(fctx->record[i].usagecount);
+			nulls[7] = false;
+		}
+
+		/* Build and return the tuple. */
+		tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
+		result = HeapTupleGetDatum(tuple);
+
+		SRF_RETURN_NEXT(funcctx, result);
+	}
+	else
+		SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/extension/pg_freespacemap/Makefile b/src/extension/pg_freespacemap/Makefile
new file mode 100644
index 0000000..0ffe226
--- /dev/null
+++ b/src/extension/pg_freespacemap/Makefile
@@ -0,0 +1,19 @@
+# src/extensions/pg_freespacemap/Makefile
+
+MODULE_big = pg_freespacemap
+OBJS = pg_freespacemap.o
+MODULEDIR = extension
+
+EXTENSION = pg_freespacemap
+DATA = pg_freespacemap--1.0.sql pg_freespacemap--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pg_freespacemap
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql b/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql
new file mode 100644
index 0000000..8188786
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql
@@ -0,0 +1,22 @@
+/* src/extensions/pg_freespacemap/pg_freespacemap--1.0.sql */
+
+-- Register the C function.
+CREATE FUNCTION pg_freespace(regclass, bigint)
+RETURNS int2
+AS 'MODULE_PATHNAME', 'pg_freespace'
+LANGUAGE C STRICT;
+
+-- pg_freespace shows the recorded space avail at each block in a relation
+CREATE FUNCTION
+  pg_freespace(rel regclass, blkno OUT bigint, avail OUT int2)
+RETURNS SETOF RECORD
+AS $$
+  SELECT blkno, pg_freespace($1, blkno) AS avail
+  FROM generate_series(0, pg_relation_size($1) / current_setting('block_size')::bigint - 1) AS blkno;
+$$
+LANGUAGE SQL;
+
+
+-- Don't want these to be available to public.
+REVOKE ALL ON FUNCTION pg_freespace(regclass, bigint) FROM PUBLIC;
+REVOKE ALL ON FUNCTION pg_freespace(regclass) FROM PUBLIC;
diff --git a/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql b/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
new file mode 100644
index 0000000..d2231ef
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
@@ -0,0 +1,4 @@
+/* src/extensions/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql */
+
+ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass,bigint);
+ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass);
diff --git a/src/extension/pg_freespacemap/pg_freespacemap.c b/src/extension/pg_freespacemap/pg_freespacemap.c
new file mode 100644
index 0000000..501da04
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap.c
@@ -0,0 +1,46 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_freespacemap.c
+ *	  display contents of a free space map
+ *
+ *	  src/extensions/pg_freespacemap/pg_freespacemap.c
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "funcapi.h"
+#include "storage/block.h"
+#include "storage/freespace.h"
+
+
+PG_MODULE_MAGIC;
+
+Datum		pg_freespace(PG_FUNCTION_ARGS);
+
+/*
+ * Returns the amount of free space on a given page, according to the
+ * free space map.
+ */
+PG_FUNCTION_INFO_V1(pg_freespace);
+
+Datum
+pg_freespace(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		blkno = PG_GETARG_INT64(1);
+	int16		freespace;
+	Relation	rel;
+
+	rel = relation_open(relid, AccessShareLock);
+
+	if (blkno < 0 || blkno > MaxBlockNumber)
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("invalid block number")));
+
+	freespace = GetRecordedFreeSpace(rel, blkno);
+
+	relation_close(rel, AccessShareLock);
+	PG_RETURN_INT16(freespace);
+}
diff --git a/src/extension/pg_freespacemap/pg_freespacemap.control b/src/extension/pg_freespacemap/pg_freespacemap.control
new file mode 100644
index 0000000..34b695f
--- /dev/null
+++ b/src/extension/pg_freespacemap/pg_freespacemap.control
@@ -0,0 +1,5 @@
+# pg_freespacemap extension
+comment = 'examine the free space map (FSM)'
+default_version = '1.0'
+module_pathname = '$libdir/pg_freespacemap'
+relocatable = true
diff --git a/src/extension/pg_stat_statements/Makefile b/src/extension/pg_stat_statements/Makefile
new file mode 100644
index 0000000..9cf3f99
--- /dev/null
+++ b/src/extension/pg_stat_statements/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pg_stat_statements/Makefile
+
+MODULE_big = pg_stat_statements
+OBJS = pg_stat_statements.o
+MODULEDIR = extension
+
+EXTENSION = pg_stat_statements
+DATA = pg_stat_statements--1.0.sql pg_stat_statements--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pg_stat_statements
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql b/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql
new file mode 100644
index 0000000..41145e7
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql
@@ -0,0 +1,36 @@
+/* src/extension/pg_stat_statements/pg_stat_statements--1.0.sql */
+
+-- Register functions.
+CREATE FUNCTION pg_stat_statements_reset()
+RETURNS void
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+CREATE FUNCTION pg_stat_statements(
+    OUT userid oid,
+    OUT dbid oid,
+    OUT query text,
+    OUT calls int8,
+    OUT total_time float8,
+    OUT rows int8,
+    OUT shared_blks_hit int8,
+    OUT shared_blks_read int8,
+    OUT shared_blks_written int8,
+    OUT local_blks_hit int8,
+    OUT local_blks_read int8,
+    OUT local_blks_written int8,
+    OUT temp_blks_read int8,
+    OUT temp_blks_written int8
+)
+RETURNS SETOF record
+AS 'MODULE_PATHNAME'
+LANGUAGE C;
+
+-- Register a view on the function for ease of use.
+CREATE VIEW pg_stat_statements AS
+  SELECT * FROM pg_stat_statements();
+
+GRANT SELECT ON pg_stat_statements TO PUBLIC;
+
+-- Don't want this to be available to non-superusers.
+REVOKE ALL ON FUNCTION pg_stat_statements_reset() FROM PUBLIC;
diff --git a/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql b/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
new file mode 100644
index 0000000..c8993b5
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
@@ -0,0 +1,5 @@
+/* src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql */
+
+ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements_reset();
+ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements();
+ALTER EXTENSION pg_stat_statements ADD view pg_stat_statements;
diff --git a/src/extension/pg_stat_statements/pg_stat_statements.c b/src/extension/pg_stat_statements/pg_stat_statements.c
new file mode 100644
index 0000000..4ecd445
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements.c
@@ -0,0 +1,1046 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_stat_statements.c
+ *		Track statement execution times across a whole database cluster.
+ *
+ * Note about locking issues: to create or delete an entry in the shared
+ * hashtable, one must hold pgss->lock exclusively.  Modifying any field
+ * in an entry except the counters requires the same.  To look up an entry,
+ * one must hold the lock shared.  To read or update the counters within
+ * an entry, one must hold the lock shared or exclusive (so the entry doesn't
+ * disappear!) and also take the entry's mutex spinlock.
+ *
+ *
+ * Copyright (c) 2008-2011, PostgreSQL Global Development Group
+ *
+ * IDENTIFICATION
+ *	  src/test/pg_stat_statements/pg_stat_statements.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include <unistd.h>
+
+#include "access/hash.h"
+#include "catalog/pg_type.h"
+#include "executor/executor.h"
+#include "executor/instrument.h"
+#include "funcapi.h"
+#include "mb/pg_wchar.h"
+#include "miscadmin.h"
+#include "pgstat.h"
+#include "storage/fd.h"
+#include "storage/ipc.h"
+#include "storage/spin.h"
+#include "tcop/utility.h"
+#include "utils/builtins.h"
+#include "utils/hsearch.h"
+#include "utils/guc.h"
+
+
+PG_MODULE_MAGIC;
+
+/* Location of stats file */
+#define PGSS_DUMP_FILE	"global/pg_stat_statements.stat"
+
+/* This constant defines the magic number in the stats file header */
+static const uint32 PGSS_FILE_HEADER = 0x20100108;
+
+/* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
+#define USAGE_EXEC(duration)	(1.0)
+#define USAGE_INIT				(1.0)	/* including initial planning */
+#define USAGE_DECREASE_FACTOR	(0.99)	/* decreased every entry_dealloc */
+#define USAGE_DEALLOC_PERCENT	5		/* free this % of entries at once */
+
+/*
+ * Hashtable key that defines the identity of a hashtable entry.  The
+ * hash comparators do not assume that the query string is null-terminated;
+ * this lets us search for an mbcliplen'd string without copying it first.
+ *
+ * Presently, the query encoding is fully determined by the source database
+ * and so we don't really need it to be in the key.  But that might not always
+ * be true. Anyway it's notationally convenient to pass it as part of the key.
+ */
+typedef struct pgssHashKey
+{
+	Oid			userid;			/* user OID */
+	Oid			dbid;			/* database OID */
+	int			encoding;		/* query encoding */
+	int			query_len;		/* # of valid bytes in query string */
+	const char *query_ptr;		/* query string proper */
+} pgssHashKey;
+
+/*
+ * The actual stats counters kept within pgssEntry.
+ */
+typedef struct Counters
+{
+	int64		calls;			/* # of times executed */
+	double		total_time;		/* total execution time in seconds */
+	int64		rows;			/* total # of retrieved or affected rows */
+	int64		shared_blks_hit;	/* # of shared buffer hits */
+	int64		shared_blks_read;		/* # of shared disk blocks read */
+	int64		shared_blks_written;	/* # of shared disk blocks written */
+	int64		local_blks_hit; /* # of local buffer hits */
+	int64		local_blks_read;	/* # of local disk blocks read */
+	int64		local_blks_written;		/* # of local disk blocks written */
+	int64		temp_blks_read; /* # of temp blocks read */
+	int64		temp_blks_written;		/* # of temp blocks written */
+	double		usage;			/* usage factor */
+} Counters;
+
+/*
+ * Statistics per statement
+ *
+ * NB: see the file read/write code before changing field order here.
+ */
+typedef struct pgssEntry
+{
+	pgssHashKey key;			/* hash key of entry - MUST BE FIRST */
+	Counters	counters;		/* the statistics for this query */
+	slock_t		mutex;			/* protects the counters only */
+	char		query[1];		/* VARIABLE LENGTH ARRAY - MUST BE LAST */
+	/* Note: the allocated length of query[] is actually pgss->query_size */
+} pgssEntry;
+
+/*
+ * Global shared state
+ */
+typedef struct pgssSharedState
+{
+	LWLockId	lock;			/* protects hashtable search/modification */
+	int			query_size;		/* max query length in bytes */
+} pgssSharedState;
+
+/*---- Local variables ----*/
+
+/* Current nesting depth of ExecutorRun calls */
+static int	nested_level = 0;
+
+/* Saved hook values in case of unload */
+static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
+static ExecutorStart_hook_type prev_ExecutorStart = NULL;
+static ExecutorRun_hook_type prev_ExecutorRun = NULL;
+static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
+static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
+static ProcessUtility_hook_type prev_ProcessUtility = NULL;
+
+/* Links to shared memory state */
+static pgssSharedState *pgss = NULL;
+static HTAB *pgss_hash = NULL;
+
+/*---- GUC variables ----*/
+
+typedef enum
+{
+	PGSS_TRACK_NONE,			/* track no statements */
+	PGSS_TRACK_TOP,				/* only top level statements */
+	PGSS_TRACK_ALL				/* all statements, including nested ones */
+}	PGSSTrackLevel;
+
+static const struct config_enum_entry track_options[] =
+{
+	{"none", PGSS_TRACK_NONE, false},
+	{"top", PGSS_TRACK_TOP, false},
+	{"all", PGSS_TRACK_ALL, false},
+	{NULL, 0, false}
+};
+
+static int	pgss_max;			/* max # statements to track */
+static int	pgss_track;			/* tracking level */
+static bool pgss_track_utility; /* whether to track utility commands */
+static bool pgss_save;			/* whether to save stats across shutdown */
+
+
+#define pgss_enabled() \
+	(pgss_track == PGSS_TRACK_ALL || \
+	(pgss_track == PGSS_TRACK_TOP && nested_level == 0))
+
+/*---- Function declarations ----*/
+
+void		_PG_init(void);
+void		_PG_fini(void);
+
+Datum		pg_stat_statements_reset(PG_FUNCTION_ARGS);
+Datum		pg_stat_statements(PG_FUNCTION_ARGS);
+
+PG_FUNCTION_INFO_V1(pg_stat_statements_reset);
+PG_FUNCTION_INFO_V1(pg_stat_statements);
+
+static void pgss_shmem_startup(void);
+static void pgss_shmem_shutdown(int code, Datum arg);
+static void pgss_ExecutorStart(QueryDesc *queryDesc, int eflags);
+static void pgss_ExecutorRun(QueryDesc *queryDesc,
+				 ScanDirection direction,
+				 long count);
+static void pgss_ExecutorFinish(QueryDesc *queryDesc);
+static void pgss_ExecutorEnd(QueryDesc *queryDesc);
+static void pgss_ProcessUtility(Node *parsetree,
+			  const char *queryString, ParamListInfo params, bool isTopLevel,
+					DestReceiver *dest, char *completionTag);
+static uint32 pgss_hash_fn(const void *key, Size keysize);
+static int	pgss_match_fn(const void *key1, const void *key2, Size keysize);
+static void pgss_store(const char *query, double total_time, uint64 rows,
+		   const BufferUsage *bufusage);
+static Size pgss_memsize(void);
+static pgssEntry *entry_alloc(pgssHashKey *key);
+static void entry_dealloc(void);
+static void entry_reset(void);
+
+
+/*
+ * Module load callback
+ */
+void
+_PG_init(void)
+{
+	/*
+	 * In order to create our shared memory area, we have to be loaded via
+	 * shared_preload_libraries.  If not, fall out without hooking into any of
+	 * the main system.  (We don't throw error here because it seems useful to
+	 * allow the pg_stat_statements functions to be created even when the
+	 * module isn't active.  The functions must protect themselves against
+	 * being called then, however.)
+	 */
+	if (!process_shared_preload_libraries_in_progress)
+		return;
+
+	/*
+	 * Define (or redefine) custom GUC variables.
+	 */
+	DefineCustomIntVariable("pg_stat_statements.max",
+	  "Sets the maximum number of statements tracked by pg_stat_statements.",
+							NULL,
+							&pgss_max,
+							1000,
+							100,
+							INT_MAX,
+							PGC_POSTMASTER,
+							0,
+							NULL,
+							NULL,
+							NULL);
+
+	DefineCustomEnumVariable("pg_stat_statements.track",
+			   "Selects which statements are tracked by pg_stat_statements.",
+							 NULL,
+							 &pgss_track,
+							 PGSS_TRACK_TOP,
+							 track_options,
+							 PGC_SUSET,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	DefineCustomBoolVariable("pg_stat_statements.track_utility",
+	   "Selects whether utility commands are tracked by pg_stat_statements.",
+							 NULL,
+							 &pgss_track_utility,
+							 true,
+							 PGC_SUSET,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	DefineCustomBoolVariable("pg_stat_statements.save",
+			   "Save pg_stat_statements statistics across server shutdowns.",
+							 NULL,
+							 &pgss_save,
+							 true,
+							 PGC_SIGHUP,
+							 0,
+							 NULL,
+							 NULL,
+							 NULL);
+
+	EmitWarningsOnPlaceholders("pg_stat_statements");
+
+	/*
+	 * Request additional shared resources.  (These are no-ops if we're not in
+	 * the postmaster process.)  We'll allocate or attach to the shared
+	 * resources in pgss_shmem_startup().
+	 */
+	RequestAddinShmemSpace(pgss_memsize());
+	RequestAddinLWLocks(1);
+
+	/*
+	 * Install hooks.
+	 */
+	prev_shmem_startup_hook = shmem_startup_hook;
+	shmem_startup_hook = pgss_shmem_startup;
+	prev_ExecutorStart = ExecutorStart_hook;
+	ExecutorStart_hook = pgss_ExecutorStart;
+	prev_ExecutorRun = ExecutorRun_hook;
+	ExecutorRun_hook = pgss_ExecutorRun;
+	prev_ExecutorFinish = ExecutorFinish_hook;
+	ExecutorFinish_hook = pgss_ExecutorFinish;
+	prev_ExecutorEnd = ExecutorEnd_hook;
+	ExecutorEnd_hook = pgss_ExecutorEnd;
+	prev_ProcessUtility = ProcessUtility_hook;
+	ProcessUtility_hook = pgss_ProcessUtility;
+}
+
+/*
+ * Module unload callback
+ */
+void
+_PG_fini(void)
+{
+	/* Uninstall hooks. */
+	shmem_startup_hook = prev_shmem_startup_hook;
+	ExecutorStart_hook = prev_ExecutorStart;
+	ExecutorRun_hook = prev_ExecutorRun;
+	ExecutorFinish_hook = prev_ExecutorFinish;
+	ExecutorEnd_hook = prev_ExecutorEnd;
+	ProcessUtility_hook = prev_ProcessUtility;
+}
+
+/*
+ * shmem_startup hook: allocate or attach to shared memory,
+ * then load any pre-existing statistics from file.
+ */
+static void
+pgss_shmem_startup(void)
+{
+	bool		found;
+	HASHCTL		info;
+	FILE	   *file;
+	uint32		header;
+	int32		num;
+	int32		i;
+	int			query_size;
+	int			buffer_size;
+	char	   *buffer = NULL;
+
+	if (prev_shmem_startup_hook)
+		prev_shmem_startup_hook();
+
+	/* reset in case this is a restart within the postmaster */
+	pgss = NULL;
+	pgss_hash = NULL;
+
+	/*
+	 * Create or attach to the shared memory state, including hash table
+	 */
+	LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
+
+	pgss = ShmemInitStruct("pg_stat_statements",
+						   sizeof(pgssSharedState),
+						   &found);
+
+	if (!found)
+	{
+		/* First time through ... */
+		pgss->lock = LWLockAssign();
+		pgss->query_size = pgstat_track_activity_query_size;
+	}
+
+	/* Be sure everyone agrees on the hash table entry size */
+	query_size = pgss->query_size;
+
+	memset(&info, 0, sizeof(info));
+	info.keysize = sizeof(pgssHashKey);
+	info.entrysize = offsetof(pgssEntry, query) +query_size;
+	info.hash = pgss_hash_fn;
+	info.match = pgss_match_fn;
+	pgss_hash = ShmemInitHash("pg_stat_statements hash",
+							  pgss_max, pgss_max,
+							  &info,
+							  HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+
+	LWLockRelease(AddinShmemInitLock);
+
+	/*
+	 * If we're in the postmaster (or a standalone backend...), set up a shmem
+	 * exit hook to dump the statistics to disk.
+	 */
+	if (!IsUnderPostmaster)
+		on_shmem_exit(pgss_shmem_shutdown, (Datum) 0);
+
+	/*
+	 * Attempt to load old statistics from the dump file, if this is the first
+	 * time through and we weren't told not to.
+	 */
+	if (found || !pgss_save)
+		return;
+
+	/*
+	 * Note: we don't bother with locks here, because there should be no other
+	 * processes running when this code is reached.
+	 */
+	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_R);
+	if (file == NULL)
+	{
+		if (errno == ENOENT)
+			return;				/* ignore not-found error */
+		goto error;
+	}
+
+	buffer_size = query_size;
+	buffer = (char *) palloc(buffer_size);
+
+	if (fread(&header, sizeof(uint32), 1, file) != 1 ||
+		header != PGSS_FILE_HEADER ||
+		fread(&num, sizeof(int32), 1, file) != 1)
+		goto error;
+
+	for (i = 0; i < num; i++)
+	{
+		pgssEntry	temp;
+		pgssEntry  *entry;
+
+		if (fread(&temp, offsetof(pgssEntry, mutex), 1, file) != 1)
+			goto error;
+
+		/* Encoding is the only field we can easily sanity-check */
+		if (!PG_VALID_BE_ENCODING(temp.key.encoding))
+			goto error;
+
+		/* Previous incarnation might have had a larger query_size */
+		if (temp.key.query_len >= buffer_size)
+		{
+			buffer = (char *) repalloc(buffer, temp.key.query_len + 1);
+			buffer_size = temp.key.query_len + 1;
+		}
+
+		if (fread(buffer, 1, temp.key.query_len, file) != temp.key.query_len)
+			goto error;
+		buffer[temp.key.query_len] = '\0';
+
+		/* Clip to available length if needed */
+		if (temp.key.query_len >= query_size)
+			temp.key.query_len = pg_encoding_mbcliplen(temp.key.encoding,
+													   buffer,
+													   temp.key.query_len,
+													   query_size - 1);
+		temp.key.query_ptr = buffer;
+
+		/* make the hashtable entry (discards old entries if too many) */
+		entry = entry_alloc(&temp.key);
+
+		/* copy in the actual stats */
+		entry->counters = temp.counters;
+	}
+
+	pfree(buffer);
+	FreeFile(file);
+	return;
+
+error:
+	ereport(LOG,
+			(errcode_for_file_access(),
+			 errmsg("could not read pg_stat_statement file \"%s\": %m",
+					PGSS_DUMP_FILE)));
+	if (buffer)
+		pfree(buffer);
+	if (file)
+		FreeFile(file);
+	/* If possible, throw away the bogus file; ignore any error */
+	unlink(PGSS_DUMP_FILE);
+}
+
+/*
+ * shmem_shutdown hook: Dump statistics into file.
+ *
+ * Note: we don't bother with acquiring lock, because there should be no
+ * other processes running when this is called.
+ */
+static void
+pgss_shmem_shutdown(int code, Datum arg)
+{
+	FILE	   *file;
+	HASH_SEQ_STATUS hash_seq;
+	int32		num_entries;
+	pgssEntry  *entry;
+
+	/* Don't try to dump during a crash. */
+	if (code)
+		return;
+
+	/* Safety check ... shouldn't get here unless shmem is set up. */
+	if (!pgss || !pgss_hash)
+		return;
+
+	/* Don't dump if told not to. */
+	if (!pgss_save)
+		return;
+
+	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_W);
+	if (file == NULL)
+		goto error;
+
+	if (fwrite(&PGSS_FILE_HEADER, sizeof(uint32), 1, file) != 1)
+		goto error;
+	num_entries = hash_get_num_entries(pgss_hash);
+	if (fwrite(&num_entries, sizeof(int32), 1, file) != 1)
+		goto error;
+
+	hash_seq_init(&hash_seq, pgss_hash);
+	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		int			len = entry->key.query_len;
+
+		if (fwrite(entry, offsetof(pgssEntry, mutex), 1, file) != 1 ||
+			fwrite(entry->query, 1, len, file) != len)
+			goto error;
+	}
+
+	if (FreeFile(file))
+	{
+		file = NULL;
+		goto error;
+	}
+
+	return;
+
+error:
+	ereport(LOG,
+			(errcode_for_file_access(),
+			 errmsg("could not write pg_stat_statement file \"%s\": %m",
+					PGSS_DUMP_FILE)));
+	if (file)
+		FreeFile(file);
+	unlink(PGSS_DUMP_FILE);
+}
+
+/*
+ * ExecutorStart hook: start up tracking if needed
+ */
+static void
+pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
+{
+	if (prev_ExecutorStart)
+		prev_ExecutorStart(queryDesc, eflags);
+	else
+		standard_ExecutorStart(queryDesc, eflags);
+
+	if (pgss_enabled())
+	{
+		/*
+		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
+		 * space is allocated in the per-query context so it will go away at
+		 * ExecutorEnd.
+		 */
+		if (queryDesc->totaltime == NULL)
+		{
+			MemoryContext oldcxt;
+
+			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
+			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
+			MemoryContextSwitchTo(oldcxt);
+		}
+	}
+}
+
+/*
+ * ExecutorRun hook: all we need do is track nesting depth
+ */
+static void
+pgss_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
+{
+	nested_level++;
+	PG_TRY();
+	{
+		if (prev_ExecutorRun)
+			prev_ExecutorRun(queryDesc, direction, count);
+		else
+			standard_ExecutorRun(queryDesc, direction, count);
+		nested_level--;
+	}
+	PG_CATCH();
+	{
+		nested_level--;
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+}
+
+/*
+ * ExecutorFinish hook: all we need do is track nesting depth
+ */
+static void
+pgss_ExecutorFinish(QueryDesc *queryDesc)
+{
+	nested_level++;
+	PG_TRY();
+	{
+		if (prev_ExecutorFinish)
+			prev_ExecutorFinish(queryDesc);
+		else
+			standard_ExecutorFinish(queryDesc);
+		nested_level--;
+	}
+	PG_CATCH();
+	{
+		nested_level--;
+		PG_RE_THROW();
+	}
+	PG_END_TRY();
+}
+
+/*
+ * ExecutorEnd hook: store results if needed
+ */
+static void
+pgss_ExecutorEnd(QueryDesc *queryDesc)
+{
+	if (queryDesc->totaltime && pgss_enabled())
+	{
+		/*
+		 * Make sure stats accumulation is done.  (Note: it's okay if several
+		 * levels of hook all do this.)
+		 */
+		InstrEndLoop(queryDesc->totaltime);
+
+		pgss_store(queryDesc->sourceText,
+				   queryDesc->totaltime->total,
+				   queryDesc->estate->es_processed,
+				   &queryDesc->totaltime->bufusage);
+	}
+
+	if (prev_ExecutorEnd)
+		prev_ExecutorEnd(queryDesc);
+	else
+		standard_ExecutorEnd(queryDesc);
+}
+
+/*
+ * ProcessUtility hook
+ */
+static void
+pgss_ProcessUtility(Node *parsetree, const char *queryString,
+					ParamListInfo params, bool isTopLevel,
+					DestReceiver *dest, char *completionTag)
+{
+	if (pgss_track_utility && pgss_enabled())
+	{
+		instr_time	start;
+		instr_time	duration;
+		uint64		rows = 0;
+		BufferUsage bufusage;
+
+		bufusage = pgBufferUsage;
+		INSTR_TIME_SET_CURRENT(start);
+
+		nested_level++;
+		PG_TRY();
+		{
+			if (prev_ProcessUtility)
+				prev_ProcessUtility(parsetree, queryString, params,
+									isTopLevel, dest, completionTag);
+			else
+				standard_ProcessUtility(parsetree, queryString, params,
+										isTopLevel, dest, completionTag);
+			nested_level--;
+		}
+		PG_CATCH();
+		{
+			nested_level--;
+			PG_RE_THROW();
+		}
+		PG_END_TRY();
+
+		INSTR_TIME_SET_CURRENT(duration);
+		INSTR_TIME_SUBTRACT(duration, start);
+
+		/* parse command tag to retrieve the number of affected rows. */
+		if (completionTag &&
+			sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1)
+			rows = 0;
+
+		/* calc differences of buffer counters. */
+		bufusage.shared_blks_hit =
+			pgBufferUsage.shared_blks_hit - bufusage.shared_blks_hit;
+		bufusage.shared_blks_read =
+			pgBufferUsage.shared_blks_read - bufusage.shared_blks_read;
+		bufusage.shared_blks_written =
+			pgBufferUsage.shared_blks_written - bufusage.shared_blks_written;
+		bufusage.local_blks_hit =
+			pgBufferUsage.local_blks_hit - bufusage.local_blks_hit;
+		bufusage.local_blks_read =
+			pgBufferUsage.local_blks_read - bufusage.local_blks_read;
+		bufusage.local_blks_written =
+			pgBufferUsage.local_blks_written - bufusage.local_blks_written;
+		bufusage.temp_blks_read =
+			pgBufferUsage.temp_blks_read - bufusage.temp_blks_read;
+		bufusage.temp_blks_written =
+			pgBufferUsage.temp_blks_written - bufusage.temp_blks_written;
+
+		pgss_store(queryString, INSTR_TIME_GET_DOUBLE(duration), rows,
+				   &bufusage);
+	}
+	else
+	{
+		if (prev_ProcessUtility)
+			prev_ProcessUtility(parsetree, queryString, params,
+								isTopLevel, dest, completionTag);
+		else
+			standard_ProcessUtility(parsetree, queryString, params,
+									isTopLevel, dest, completionTag);
+	}
+}
+
+/*
+ * Calculate hash value for a key
+ */
+static uint32
+pgss_hash_fn(const void *key, Size keysize)
+{
+	const pgssHashKey *k = (const pgssHashKey *) key;
+
+	/* we don't bother to include encoding in the hash */
+	return hash_uint32((uint32) k->userid) ^
+		hash_uint32((uint32) k->dbid) ^
+		DatumGetUInt32(hash_any((const unsigned char *) k->query_ptr,
+								k->query_len));
+}
+
+/*
+ * Compare two keys - zero means match
+ */
+static int
+pgss_match_fn(const void *key1, const void *key2, Size keysize)
+{
+	const pgssHashKey *k1 = (const pgssHashKey *) key1;
+	const pgssHashKey *k2 = (const pgssHashKey *) key2;
+
+	if (k1->userid == k2->userid &&
+		k1->dbid == k2->dbid &&
+		k1->encoding == k2->encoding &&
+		k1->query_len == k2->query_len &&
+		memcmp(k1->query_ptr, k2->query_ptr, k1->query_len) == 0)
+		return 0;
+	else
+		return 1;
+}
+
+/*
+ * Store some statistics for a statement.
+ */
+static void
+pgss_store(const char *query, double total_time, uint64 rows,
+		   const BufferUsage *bufusage)
+{
+	pgssHashKey key;
+	double		usage;
+	pgssEntry  *entry;
+
+	Assert(query != NULL);
+
+	/* Safety check... */
+	if (!pgss || !pgss_hash)
+		return;
+
+	/* Set up key for hashtable search */
+	key.userid = GetUserId();
+	key.dbid = MyDatabaseId;
+	key.encoding = GetDatabaseEncoding();
+	key.query_len = strlen(query);
+	if (key.query_len >= pgss->query_size)
+		key.query_len = pg_encoding_mbcliplen(key.encoding,
+											  query,
+											  key.query_len,
+											  pgss->query_size - 1);
+	key.query_ptr = query;
+
+	usage = USAGE_EXEC(duration);
+
+	/* Lookup the hash table entry with shared lock. */
+	LWLockAcquire(pgss->lock, LW_SHARED);
+
+	entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);
+	if (!entry)
+	{
+		/* Must acquire exclusive lock to add a new entry. */
+		LWLockRelease(pgss->lock);
+		LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
+		entry = entry_alloc(&key);
+	}
+
+	/* Grab the spinlock while updating the counters. */
+	{
+		volatile pgssEntry *e = (volatile pgssEntry *) entry;
+
+		SpinLockAcquire(&e->mutex);
+		e->counters.calls += 1;
+		e->counters.total_time += total_time;
+		e->counters.rows += rows;
+		e->counters.shared_blks_hit += bufusage->shared_blks_hit;
+		e->counters.shared_blks_read += bufusage->shared_blks_read;
+		e->counters.shared_blks_written += bufusage->shared_blks_written;
+		e->counters.local_blks_hit += bufusage->local_blks_hit;
+		e->counters.local_blks_read += bufusage->local_blks_read;
+		e->counters.local_blks_written += bufusage->local_blks_written;
+		e->counters.temp_blks_read += bufusage->temp_blks_read;
+		e->counters.temp_blks_written += bufusage->temp_blks_written;
+		e->counters.usage += usage;
+		SpinLockRelease(&e->mutex);
+	}
+
+	LWLockRelease(pgss->lock);
+}
+
+/*
+ * Reset all statement statistics.
+ */
+Datum
+pg_stat_statements_reset(PG_FUNCTION_ARGS)
+{
+	if (!pgss || !pgss_hash)
+		ereport(ERROR,
+				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
+	entry_reset();
+	PG_RETURN_VOID();
+}
+
+#define PG_STAT_STATEMENTS_COLS		14
+
+/*
+ * Retrieve statement statistics.
+ */
+Datum
+pg_stat_statements(PG_FUNCTION_ARGS)
+{
+	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+	TupleDesc	tupdesc;
+	Tuplestorestate *tupstore;
+	MemoryContext per_query_ctx;
+	MemoryContext oldcontext;
+	Oid			userid = GetUserId();
+	bool		is_superuser = superuser();
+	HASH_SEQ_STATUS hash_seq;
+	pgssEntry  *entry;
+
+	if (!pgss || !pgss_hash)
+		ereport(ERROR,
+				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
+
+	/* check to see if caller supports us returning a tuplestore */
+	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("set-valued function called in context that cannot accept a set")));
+	if (!(rsinfo->allowedModes & SFRM_Materialize))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("materialize mode required, but it is not " \
+						"allowed in this context")));
+
+	/* Build a tuple descriptor for our result type */
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+
+	tupstore = tuplestore_begin_heap(true, false, work_mem);
+	rsinfo->returnMode = SFRM_Materialize;
+	rsinfo->setResult = tupstore;
+	rsinfo->setDesc = tupdesc;
+
+	MemoryContextSwitchTo(oldcontext);
+
+	LWLockAcquire(pgss->lock, LW_SHARED);
+
+	hash_seq_init(&hash_seq, pgss_hash);
+	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		Datum		values[PG_STAT_STATEMENTS_COLS];
+		bool		nulls[PG_STAT_STATEMENTS_COLS];
+		int			i = 0;
+		Counters	tmp;
+
+		memset(values, 0, sizeof(values));
+		memset(nulls, 0, sizeof(nulls));
+
+		values[i++] = ObjectIdGetDatum(entry->key.userid);
+		values[i++] = ObjectIdGetDatum(entry->key.dbid);
+
+		if (is_superuser || entry->key.userid == userid)
+		{
+			char	   *qstr;
+
+			qstr = (char *)
+				pg_do_encoding_conversion((unsigned char *) entry->query,
+										  entry->key.query_len,
+										  entry->key.encoding,
+										  GetDatabaseEncoding());
+			values[i++] = CStringGetTextDatum(qstr);
+			if (qstr != entry->query)
+				pfree(qstr);
+		}
+		else
+			values[i++] = CStringGetTextDatum("<insufficient privilege>");
+
+		/* copy counters to a local variable to keep locking time short */
+		{
+			volatile pgssEntry *e = (volatile pgssEntry *) entry;
+
+			SpinLockAcquire(&e->mutex);
+			tmp = e->counters;
+			SpinLockRelease(&e->mutex);
+		}
+
+		values[i++] = Int64GetDatumFast(tmp.calls);
+		values[i++] = Float8GetDatumFast(tmp.total_time);
+		values[i++] = Int64GetDatumFast(tmp.rows);
+		values[i++] = Int64GetDatumFast(tmp.shared_blks_hit);
+		values[i++] = Int64GetDatumFast(tmp.shared_blks_read);
+		values[i++] = Int64GetDatumFast(tmp.shared_blks_written);
+		values[i++] = Int64GetDatumFast(tmp.local_blks_hit);
+		values[i++] = Int64GetDatumFast(tmp.local_blks_read);
+		values[i++] = Int64GetDatumFast(tmp.local_blks_written);
+		values[i++] = Int64GetDatumFast(tmp.temp_blks_read);
+		values[i++] = Int64GetDatumFast(tmp.temp_blks_written);
+
+		Assert(i == PG_STAT_STATEMENTS_COLS);
+
+		tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+	}
+
+	LWLockRelease(pgss->lock);
+
+	/* clean up and return the tuplestore */
+	tuplestore_donestoring(tupstore);
+
+	return (Datum) 0;
+}
+
+/*
+ * Estimate shared memory space needed.
+ */
+static Size
+pgss_memsize(void)
+{
+	Size		size;
+	Size		entrysize;
+
+	size = MAXALIGN(sizeof(pgssSharedState));
+	entrysize = offsetof(pgssEntry, query) +pgstat_track_activity_query_size;
+	size = add_size(size, hash_estimate_size(pgss_max, entrysize));
+
+	return size;
+}
+
+/*
+ * Allocate a new hashtable entry.
+ * caller must hold an exclusive lock on pgss->lock
+ *
+ * Note: despite needing exclusive lock, it's not an error for the target
+ * entry to already exist.	This is because pgss_store releases and
+ * reacquires lock after failing to find a match; so someone else could
+ * have made the entry while we waited to get exclusive lock.
+ */
+static pgssEntry *
+entry_alloc(pgssHashKey *key)
+{
+	pgssEntry  *entry;
+	bool		found;
+
+	/* Caller must have clipped query properly */
+	Assert(key->query_len < pgss->query_size);
+
+	/* Make space if needed */
+	while (hash_get_num_entries(pgss_hash) >= pgss_max)
+		entry_dealloc();
+
+	/* Find or create an entry with desired hash code */
+	entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER, &found);
+
+	if (!found)
+	{
+		/* New entry, initialize it */
+
+		/* dynahash tried to copy the key for us, but must fix query_ptr */
+		entry->key.query_ptr = entry->query;
+		/* reset the statistics */
+		memset(&entry->counters, 0, sizeof(Counters));
+		entry->counters.usage = USAGE_INIT;
+		/* re-initialize the mutex each time ... we assume no one using it */
+		SpinLockInit(&entry->mutex);
+		/* ... and don't forget the query text */
+		memcpy(entry->query, key->query_ptr, key->query_len);
+		entry->query[key->query_len] = '\0';
+	}
+
+	return entry;
+}
+
+/*
+ * qsort comparator for sorting into increasing usage order
+ */
+static int
+entry_cmp(const void *lhs, const void *rhs)
+{
+	double		l_usage = (*(const pgssEntry **) lhs)->counters.usage;
+	double		r_usage = (*(const pgssEntry **) rhs)->counters.usage;
+
+	if (l_usage < r_usage)
+		return -1;
+	else if (l_usage > r_usage)
+		return +1;
+	else
+		return 0;
+}
+
+/*
+ * Deallocate least used entries.
+ * Caller must hold an exclusive lock on pgss->lock.
+ */
+static void
+entry_dealloc(void)
+{
+	HASH_SEQ_STATUS hash_seq;
+	pgssEntry **entries;
+	pgssEntry  *entry;
+	int			nvictims;
+	int			i;
+
+	/* Sort entries by usage and deallocate USAGE_DEALLOC_PERCENT of them. */
+
+	entries = palloc(hash_get_num_entries(pgss_hash) * sizeof(pgssEntry *));
+
+	i = 0;
+	hash_seq_init(&hash_seq, pgss_hash);
+	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		entries[i++] = entry;
+		entry->counters.usage *= USAGE_DECREASE_FACTOR;
+	}
+
+	qsort(entries, i, sizeof(pgssEntry *), entry_cmp);
+	nvictims = Max(10, i * USAGE_DEALLOC_PERCENT / 100);
+	nvictims = Min(nvictims, i);
+
+	for (i = 0; i < nvictims; i++)
+	{
+		hash_search(pgss_hash, &entries[i]->key, HASH_REMOVE, NULL);
+	}
+
+	pfree(entries);
+}
+
+/*
+ * Release all entries.
+ */
+static void
+entry_reset(void)
+{
+	HASH_SEQ_STATUS hash_seq;
+	pgssEntry  *entry;
+
+	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
+
+	hash_seq_init(&hash_seq, pgss_hash);
+	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+	{
+		hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
+	}
+
+	LWLockRelease(pgss->lock);
+}
diff --git a/src/extension/pg_stat_statements/pg_stat_statements.control b/src/extension/pg_stat_statements/pg_stat_statements.control
new file mode 100644
index 0000000..6f9a947
--- /dev/null
+++ b/src/extension/pg_stat_statements/pg_stat_statements.control
@@ -0,0 +1,5 @@
+# pg_stat_statements extension
+comment = 'track execution statistics of all SQL statements executed'
+default_version = '1.0'
+module_pathname = '$libdir/pg_stat_statements'
+relocatable = true
diff --git a/src/extension/pgrowlocks/Makefile b/src/extension/pgrowlocks/Makefile
new file mode 100644
index 0000000..a4191fb
--- /dev/null
+++ b/src/extension/pgrowlocks/Makefile
@@ -0,0 +1,19 @@
+# contrib/pgrowlocks/Makefile
+
+MODULE_big	= pgrowlocks
+OBJS		= pgrowlocks.o
+MODULEDIR   = extension
+
+EXTENSION = pgrowlocks
+DATA = pgrowlocks--1.0.sql pgrowlocks--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pgrowlocks
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pgrowlocks/pgrowlocks--1.0.sql b/src/extension/pgrowlocks/pgrowlocks--1.0.sql
new file mode 100644
index 0000000..0b60fdc
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks--1.0.sql
@@ -0,0 +1,12 @@
+/* contrib/pgrowlocks/pgrowlocks--1.0.sql */
+
+CREATE FUNCTION pgrowlocks(IN relname text,
+    OUT locked_row TID,		-- row TID
+    OUT lock_type TEXT,		-- lock type
+    OUT locker XID,		-- locking XID
+    OUT multi bool,		-- multi XID?
+    OUT xids xid[],		-- multi XIDs
+    OUT pids INTEGER[])		-- locker's process id
+RETURNS SETOF record
+AS 'MODULE_PATHNAME', 'pgrowlocks'
+LANGUAGE C STRICT;
diff --git a/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql b/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
new file mode 100644
index 0000000..90d7088
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
@@ -0,0 +1,3 @@
+/* src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql */
+
+ALTER EXTENSION pgrowlocks ADD function pgrowlocks(text);
diff --git a/src/extension/pgrowlocks/pgrowlocks.c b/src/extension/pgrowlocks/pgrowlocks.c
new file mode 100644
index 0000000..aa41491
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks.c
@@ -0,0 +1,220 @@
+/*
+ * src/extension/pgrowlocks/pgrowlocks.c
+ *
+ * Copyright (c) 2005-2006	Tatsuo Ishii
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/multixact.h"
+#include "access/relscan.h"
+#include "access/xact.h"
+#include "catalog/namespace.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "storage/procarray.h"
+#include "utils/acl.h"
+#include "utils/builtins.h"
+#include "utils/tqual.h"
+
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(pgrowlocks);
+
+extern Datum pgrowlocks(PG_FUNCTION_ARGS);
+
+/* ----------
+ * pgrowlocks:
+ * returns tids of rows being locked
+ * ----------
+ */
+
+#define NCHARS 32
+
+typedef struct
+{
+	Relation	rel;
+	HeapScanDesc scan;
+	int			ncolumns;
+} MyData;
+
+Datum
+pgrowlocks(PG_FUNCTION_ARGS)
+{
+	FuncCallContext *funcctx;
+	HeapScanDesc scan;
+	HeapTuple	tuple;
+	TupleDesc	tupdesc;
+	AttInMetadata *attinmeta;
+	Datum		result;
+	MyData	   *mydata;
+	Relation	rel;
+
+	if (SRF_IS_FIRSTCALL())
+	{
+		text	   *relname;
+		RangeVar   *relrv;
+		MemoryContext oldcontext;
+		AclResult	aclresult;
+
+		funcctx = SRF_FIRSTCALL_INIT();
+		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+		/* Build a tuple descriptor for our result type */
+		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+			elog(ERROR, "return type must be a row type");
+
+		attinmeta = TupleDescGetAttInMetadata(tupdesc);
+		funcctx->attinmeta = attinmeta;
+
+		relname = PG_GETARG_TEXT_P(0);
+		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+		rel = heap_openrv(relrv, AccessShareLock);
+
+		/* check permissions: must have SELECT on table */
+		aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),
+									  ACL_SELECT);
+		if (aclresult != ACLCHECK_OK)
+			aclcheck_error(aclresult, ACL_KIND_CLASS,
+						   RelationGetRelationName(rel));
+
+		scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
+		mydata = palloc(sizeof(*mydata));
+		mydata->rel = rel;
+		mydata->scan = scan;
+		mydata->ncolumns = tupdesc->natts;
+		funcctx->user_fctx = mydata;
+
+		MemoryContextSwitchTo(oldcontext);
+	}
+
+	funcctx = SRF_PERCALL_SETUP();
+	attinmeta = funcctx->attinmeta;
+	mydata = (MyData *) funcctx->user_fctx;
+	scan = mydata->scan;
+
+	/* scan the relation */
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		/* must hold a buffer lock to call HeapTupleSatisfiesUpdate */
+		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+
+		if (HeapTupleSatisfiesUpdate(tuple->t_data,
+									 GetCurrentCommandId(false),
+									 scan->rs_cbuf) == HeapTupleBeingUpdated)
+		{
+
+			char	  **values;
+			int			i;
+
+			values = (char **) palloc(mydata->ncolumns * sizeof(char *));
+
+			i = 0;
+			values[i++] = (char *) DirectFunctionCall1(tidout, PointerGetDatum(&tuple->t_self));
+
+			if (tuple->t_data->t_infomask & HEAP_XMAX_SHARED_LOCK)
+				values[i++] = pstrdup("Shared");
+			else
+				values[i++] = pstrdup("Exclusive");
+			values[i] = palloc(NCHARS * sizeof(char));
+			snprintf(values[i++], NCHARS, "%d", HeapTupleHeaderGetXmax(tuple->t_data));
+			if (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
+			{
+				TransactionId *xids;
+				int			nxids;
+				int			j;
+				int			isValidXid = 0;		/* any valid xid ever exists? */
+
+				values[i++] = pstrdup("true");
+				nxids = GetMultiXactIdMembers(HeapTupleHeaderGetXmax(tuple->t_data), &xids);
+				if (nxids == -1)
+				{
+					elog(ERROR, "GetMultiXactIdMembers returns error");
+				}
+
+				values[i] = palloc(NCHARS * nxids);
+				values[i + 1] = palloc(NCHARS * nxids);
+				strcpy(values[i], "{");
+				strcpy(values[i + 1], "{");
+
+				for (j = 0; j < nxids; j++)
+				{
+					char		buf[NCHARS];
+
+					if (TransactionIdIsInProgress(xids[j]))
+					{
+						if (isValidXid)
+						{
+							strcat(values[i], ",");
+							strcat(values[i + 1], ",");
+						}
+						snprintf(buf, NCHARS, "%d", xids[j]);
+						strcat(values[i], buf);
+						snprintf(buf, NCHARS, "%d", BackendXidGetPid(xids[j]));
+						strcat(values[i + 1], buf);
+
+						isValidXid = 1;
+					}
+				}
+
+				strcat(values[i], "}");
+				strcat(values[i + 1], "}");
+				i++;
+			}
+			else
+			{
+				values[i++] = pstrdup("false");
+				values[i] = palloc(NCHARS * sizeof(char));
+				snprintf(values[i++], NCHARS, "{%d}", HeapTupleHeaderGetXmax(tuple->t_data));
+
+				values[i] = palloc(NCHARS * sizeof(char));
+				snprintf(values[i++], NCHARS, "{%d}", BackendXidGetPid(HeapTupleHeaderGetXmax(tuple->t_data)));
+			}
+
+			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+
+			/* build a tuple */
+			tuple = BuildTupleFromCStrings(attinmeta, values);
+
+			/* make the tuple into a datum */
+			result = HeapTupleGetDatum(tuple);
+
+			/* Clean up */
+			for (i = 0; i < mydata->ncolumns; i++)
+				pfree(values[i]);
+			pfree(values);
+
+			SRF_RETURN_NEXT(funcctx, result);
+		}
+		else
+		{
+			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+	}
+
+	heap_endscan(scan);
+	heap_close(mydata->rel, AccessShareLock);
+
+	SRF_RETURN_DONE(funcctx);
+}
diff --git a/src/extension/pgrowlocks/pgrowlocks.control b/src/extension/pgrowlocks/pgrowlocks.control
new file mode 100644
index 0000000..a6ba164
--- /dev/null
+++ b/src/extension/pgrowlocks/pgrowlocks.control
@@ -0,0 +1,5 @@
+# pgrowlocks extension
+comment = 'show row-level locking information'
+default_version = '1.0'
+module_pathname = '$libdir/pgrowlocks'
+relocatable = true
diff --git a/src/extension/pgstattuple/Makefile b/src/extension/pgstattuple/Makefile
new file mode 100644
index 0000000..296ca57
--- /dev/null
+++ b/src/extension/pgstattuple/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pgstattuple/Makefile
+
+MODULE_big	= pgstattuple
+OBJS		= pgstattuple.o pgstatindex.o
+MODULEDIR   = extension
+
+EXTENSION = pgstattuple
+DATA = pgstattuple--1.0.sql pgstattuple--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pgstattuple
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pgstattuple/pgstatindex.c b/src/extension/pgstattuple/pgstatindex.c
new file mode 100644
index 0000000..77ca208
--- /dev/null
+++ b/src/extension/pgstattuple/pgstatindex.c
@@ -0,0 +1,282 @@
+/*
+ * src/extension/pgstattuple/pgstatindex.c
+ *
+ *
+ * pgstatindex
+ *
+ * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "access/nbtree.h"
+#include "catalog/namespace.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "utils/builtins.h"
+
+
+extern Datum pgstatindex(PG_FUNCTION_ARGS);
+extern Datum pg_relpages(PG_FUNCTION_ARGS);
+
+PG_FUNCTION_INFO_V1(pgstatindex);
+PG_FUNCTION_INFO_V1(pg_relpages);
+
+#define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
+#define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
+
+#define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
+		if ( !(FirstOffsetNumber <= (offnum) && \
+						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
+			 elog(ERROR, "page offset number out of range"); }
+
+/* note: BlockNumber is unsigned, hence can't be negative */
+#define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
+		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
+			 elog(ERROR, "block number out of range"); }
+
+/* ------------------------------------------------
+ * A structure for a whole btree index statistics
+ * used by pgstatindex().
+ * ------------------------------------------------
+ */
+typedef struct BTIndexStat
+{
+	uint32		version;
+	uint32		level;
+	BlockNumber root_blkno;
+
+	uint64		root_pages;
+	uint64		internal_pages;
+	uint64		leaf_pages;
+	uint64		empty_pages;
+	uint64		deleted_pages;
+
+	uint64		max_avail;
+	uint64		free_space;
+
+	uint64		fragments;
+} BTIndexStat;
+
+/* ------------------------------------------------------
+ * pgstatindex()
+ *
+ * Usage: SELECT * FROM pgstatindex('t1_pkey');
+ * ------------------------------------------------------
+ */
+Datum
+pgstatindex(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	Relation	rel;
+	RangeVar   *relrv;
+	Datum		result;
+	BlockNumber nblocks;
+	BlockNumber blkno;
+	BTIndexStat indexStat;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use pgstattuple functions"))));
+
+	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+	rel = relation_openrv(relrv, AccessShareLock);
+
+	if (!IS_INDEX(rel) || !IS_BTREE(rel))
+		elog(ERROR, "relation \"%s\" is not a btree index",
+			 RelationGetRelationName(rel));
+
+	/*
+	 * Reject attempts to read non-local temporary relations; we would be
+	 * likely to get wrong data since we have no visibility into the owning
+	 * session's local buffers.
+	 */
+	if (RELATION_IS_OTHER_TEMP(rel))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot access temporary tables of other sessions")));
+
+	/*
+	 * Read metapage
+	 */
+	{
+		Buffer		buffer = ReadBuffer(rel, 0);
+		Page		page = BufferGetPage(buffer);
+		BTMetaPageData *metad = BTPageGetMeta(page);
+
+		indexStat.version = metad->btm_version;
+		indexStat.level = metad->btm_level;
+		indexStat.root_blkno = metad->btm_root;
+
+		ReleaseBuffer(buffer);
+	}
+
+	/* -- init counters -- */
+	indexStat.root_pages = 0;
+	indexStat.internal_pages = 0;
+	indexStat.leaf_pages = 0;
+	indexStat.empty_pages = 0;
+	indexStat.deleted_pages = 0;
+
+	indexStat.max_avail = 0;
+	indexStat.free_space = 0;
+
+	indexStat.fragments = 0;
+
+	/*
+	 * Scan all blocks except the metapage
+	 */
+	nblocks = RelationGetNumberOfBlocks(rel);
+
+	for (blkno = 1; blkno < nblocks; blkno++)
+	{
+		Buffer		buffer;
+		Page		page;
+		BTPageOpaque opaque;
+
+		/* Read and lock buffer */
+		buffer = ReadBuffer(rel, blkno);
+		LockBuffer(buffer, BUFFER_LOCK_SHARE);
+
+		page = BufferGetPage(buffer);
+		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+
+		/* Determine page type, and update totals */
+
+		if (P_ISLEAF(opaque))
+		{
+			int			max_avail;
+
+			max_avail = BLCKSZ - (BLCKSZ - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
+			indexStat.max_avail += max_avail;
+			indexStat.free_space += PageGetFreeSpace(page);
+
+			indexStat.leaf_pages++;
+
+			/*
+			 * If the next leaf is on an earlier block, it means a
+			 * fragmentation.
+			 */
+			if (opaque->btpo_next != P_NONE && opaque->btpo_next < blkno)
+				indexStat.fragments++;
+		}
+		else if (P_ISDELETED(opaque))
+			indexStat.deleted_pages++;
+		else if (P_IGNORE(opaque))
+			indexStat.empty_pages++;
+		else if (P_ISROOT(opaque))
+			indexStat.root_pages++;
+		else
+			indexStat.internal_pages++;
+
+		/* Unlock and release buffer */
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	relation_close(rel, AccessShareLock);
+
+	/*----------------------------
+	 * Build a result tuple
+	 *----------------------------
+	 */
+	{
+		TupleDesc	tupleDesc;
+		int			j;
+		char	   *values[10];
+		HeapTuple	tuple;
+
+		/* Build a tuple descriptor for our result type */
+		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+			elog(ERROR, "return type must be a row type");
+
+		j = 0;
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%d", indexStat.version);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%d", indexStat.level);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, INT64_FORMAT,
+				 (indexStat.root_pages +
+				  indexStat.leaf_pages +
+				  indexStat.internal_pages +
+				  indexStat.deleted_pages +
+				  indexStat.empty_pages) * BLCKSZ);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%u", indexStat.root_blkno);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, INT64_FORMAT, indexStat.internal_pages);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, INT64_FORMAT, indexStat.leaf_pages);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, INT64_FORMAT, indexStat.empty_pages);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, INT64_FORMAT, indexStat.deleted_pages);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%.2f", 100.0 - (double) indexStat.free_space / (double) indexStat.max_avail * 100.0);
+		values[j] = palloc(32);
+		snprintf(values[j++], 32, "%.2f", (double) indexStat.fragments / (double) indexStat.leaf_pages * 100.0);
+
+		tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+									   values);
+
+		result = HeapTupleGetDatum(tuple);
+	}
+
+	PG_RETURN_DATUM(result);
+}
+
+/* --------------------------------------------------------
+ * pg_relpages()
+ *
+ * Get the number of pages of the table/index.
+ *
+ * Usage: SELECT pg_relpages('t1');
+ *		  SELECT pg_relpages('t1_pkey');
+ * --------------------------------------------------------
+ */
+Datum
+pg_relpages(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	int64		relpages;
+	Relation	rel;
+	RangeVar   *relrv;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use pgstattuple functions"))));
+
+	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+	rel = relation_openrv(relrv, AccessShareLock);
+
+	/* note: this will work OK on non-local temp tables */
+
+	relpages = RelationGetNumberOfBlocks(rel);
+
+	relation_close(rel, AccessShareLock);
+
+	PG_RETURN_INT64(relpages);
+}
diff --git a/src/extension/pgstattuple/pgstattuple--1.0.sql b/src/extension/pgstattuple/pgstattuple--1.0.sql
new file mode 100644
index 0000000..7b78905
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple--1.0.sql
@@ -0,0 +1,46 @@
+/* src/extension/pgstattuple/pgstattuple--1.0.sql */
+
+CREATE FUNCTION pgstattuple(IN relname text,
+    OUT table_len BIGINT,		-- physical table length in bytes
+    OUT tuple_count BIGINT,		-- number of live tuples
+    OUT tuple_len BIGINT,		-- total tuples length in bytes
+    OUT tuple_percent FLOAT8,		-- live tuples in %
+    OUT dead_tuple_count BIGINT,	-- number of dead tuples
+    OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
+    OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
+    OUT free_space BIGINT,		-- free space in bytes
+    OUT free_percent FLOAT8)		-- free space in %
+AS 'MODULE_PATHNAME', 'pgstattuple'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION pgstattuple(IN reloid oid,
+    OUT table_len BIGINT,		-- physical table length in bytes
+    OUT tuple_count BIGINT,		-- number of live tuples
+    OUT tuple_len BIGINT,		-- total tuples length in bytes
+    OUT tuple_percent FLOAT8,		-- live tuples in %
+    OUT dead_tuple_count BIGINT,	-- number of dead tuples
+    OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
+    OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
+    OUT free_space BIGINT,		-- free space in bytes
+    OUT free_percent FLOAT8)		-- free space in %
+AS 'MODULE_PATHNAME', 'pgstattuplebyid'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION pgstatindex(IN relname text,
+    OUT version INT,
+    OUT tree_level INT,
+    OUT index_size BIGINT,
+    OUT root_block_no BIGINT,
+    OUT internal_pages BIGINT,
+    OUT leaf_pages BIGINT,
+    OUT empty_pages BIGINT,
+    OUT deleted_pages BIGINT,
+    OUT avg_leaf_density FLOAT8,
+    OUT leaf_fragmentation FLOAT8)
+AS 'MODULE_PATHNAME', 'pgstatindex'
+LANGUAGE C STRICT;
+
+CREATE FUNCTION pg_relpages(IN relname text)
+RETURNS BIGINT
+AS 'MODULE_PATHNAME', 'pg_relpages'
+LANGUAGE C STRICT;
diff --git a/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql b/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql
new file mode 100644
index 0000000..6a1474a
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql
@@ -0,0 +1,6 @@
+/* src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql */
+
+ALTER EXTENSION pgstattuple ADD function pgstattuple(text);
+ALTER EXTENSION pgstattuple ADD function pgstattuple(oid);
+ALTER EXTENSION pgstattuple ADD function pgstatindex(text);
+ALTER EXTENSION pgstattuple ADD function pg_relpages(text);
diff --git a/src/extension/pgstattuple/pgstattuple.c b/src/extension/pgstattuple/pgstattuple.c
new file mode 100644
index 0000000..76357ee
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple.c
@@ -0,0 +1,518 @@
+/*
+ * src/extension/pgstattuple/pgstattuple.c
+ *
+ * Copyright (c) 2001,2002	Tatsuo Ishii
+ *
+ * Permission to use, copy, modify, and distribute this software and
+ * its documentation for any purpose, without fee, and without a
+ * written agreement is hereby granted, provided that the above
+ * copyright notice and this paragraph and the following two
+ * paragraphs appear in all copies.
+ *
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+ * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+ * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+ * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+ * OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+ * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+ * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+ */
+
+#include "postgres.h"
+
+#include "access/gist_private.h"
+#include "access/hash.h"
+#include "access/nbtree.h"
+#include "access/relscan.h"
+#include "catalog/namespace.h"
+#include "funcapi.h"
+#include "miscadmin.h"
+#include "storage/bufmgr.h"
+#include "storage/lmgr.h"
+#include "utils/builtins.h"
+#include "utils/tqual.h"
+
+
+PG_MODULE_MAGIC;
+
+PG_FUNCTION_INFO_V1(pgstattuple);
+PG_FUNCTION_INFO_V1(pgstattuplebyid);
+
+extern Datum pgstattuple(PG_FUNCTION_ARGS);
+extern Datum pgstattuplebyid(PG_FUNCTION_ARGS);
+
+/*
+ * struct pgstattuple_type
+ *
+ * tuple_percent, dead_tuple_percent and free_percent are computable,
+ * so not defined here.
+ */
+typedef struct pgstattuple_type
+{
+	uint64		table_len;
+	uint64		tuple_count;
+	uint64		tuple_len;
+	uint64		dead_tuple_count;
+	uint64		dead_tuple_len;
+	uint64		free_space;		/* free/reusable space in bytes */
+} pgstattuple_type;
+
+typedef void (*pgstat_page) (pgstattuple_type *, Relation, BlockNumber);
+
+static Datum build_pgstattuple_type(pgstattuple_type *stat,
+					   FunctionCallInfo fcinfo);
+static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo);
+static Datum pgstat_heap(Relation rel, FunctionCallInfo fcinfo);
+static void pgstat_btree_page(pgstattuple_type *stat,
+				  Relation rel, BlockNumber blkno);
+static void pgstat_hash_page(pgstattuple_type *stat,
+				 Relation rel, BlockNumber blkno);
+static void pgstat_gist_page(pgstattuple_type *stat,
+				 Relation rel, BlockNumber blkno);
+static Datum pgstat_index(Relation rel, BlockNumber start,
+			 pgstat_page pagefn, FunctionCallInfo fcinfo);
+static void pgstat_index_page(pgstattuple_type *stat, Page page,
+				  OffsetNumber minoff, OffsetNumber maxoff);
+
+/*
+ * build_pgstattuple_type -- build a pgstattuple_type tuple
+ */
+static Datum
+build_pgstattuple_type(pgstattuple_type *stat, FunctionCallInfo fcinfo)
+{
+#define NCOLUMNS	9
+#define NCHARS		32
+
+	HeapTuple	tuple;
+	char	   *values[NCOLUMNS];
+	char		values_buf[NCOLUMNS][NCHARS];
+	int			i;
+	double		tuple_percent;
+	double		dead_tuple_percent;
+	double		free_percent;	/* free/reusable space in % */
+	TupleDesc	tupdesc;
+	AttInMetadata *attinmeta;
+
+	/* Build a tuple descriptor for our result type */
+	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+		elog(ERROR, "return type must be a row type");
+
+	/*
+	 * Generate attribute metadata needed later to produce tuples from raw C
+	 * strings
+	 */
+	attinmeta = TupleDescGetAttInMetadata(tupdesc);
+
+	if (stat->table_len == 0)
+	{
+		tuple_percent = 0.0;
+		dead_tuple_percent = 0.0;
+		free_percent = 0.0;
+	}
+	else
+	{
+		tuple_percent = 100.0 * stat->tuple_len / stat->table_len;
+		dead_tuple_percent = 100.0 * stat->dead_tuple_len / stat->table_len;
+		free_percent = 100.0 * stat->free_space / stat->table_len;
+	}
+
+	/*
+	 * Prepare a values array for constructing the tuple. This should be an
+	 * array of C strings which will be processed later by the appropriate
+	 * "in" functions.
+	 */
+	for (i = 0; i < NCOLUMNS; i++)
+		values[i] = values_buf[i];
+	i = 0;
+	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->table_len);
+	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_count);
+	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_len);
+	snprintf(values[i++], NCHARS, "%.2f", tuple_percent);
+	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_count);
+	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_len);
+	snprintf(values[i++], NCHARS, "%.2f", dead_tuple_percent);
+	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->free_space);
+	snprintf(values[i++], NCHARS, "%.2f", free_percent);
+
+	/* build a tuple */
+	tuple = BuildTupleFromCStrings(attinmeta, values);
+
+	/* make the tuple into a datum */
+	return HeapTupleGetDatum(tuple);
+}
+
+/* ----------
+ * pgstattuple:
+ * returns live/dead tuples info
+ *
+ * C FUNCTION definition
+ * pgstattuple(text) returns pgstattuple_type
+ * see pgstattuple.sql for pgstattuple_type
+ * ----------
+ */
+
+Datum
+pgstattuple(PG_FUNCTION_ARGS)
+{
+	text	   *relname = PG_GETARG_TEXT_P(0);
+	RangeVar   *relrv;
+	Relation	rel;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use pgstattuple functions"))));
+
+	/* open relation */
+	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+	rel = relation_openrv(relrv, AccessShareLock);
+
+	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
+}
+
+Datum
+pgstattuplebyid(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	Relation	rel;
+
+	if (!superuser())
+		ereport(ERROR,
+				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+				 (errmsg("must be superuser to use pgstattuple functions"))));
+
+	/* open relation */
+	rel = relation_open(relid, AccessShareLock);
+
+	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
+}
+
+/*
+ * pgstat_relation
+ */
+static Datum
+pgstat_relation(Relation rel, FunctionCallInfo fcinfo)
+{
+	const char *err;
+
+	/*
+	 * Reject attempts to read non-local temporary relations; we would be
+	 * likely to get wrong data since we have no visibility into the owning
+	 * session's local buffers.
+	 */
+	if (RELATION_IS_OTHER_TEMP(rel))
+		ereport(ERROR,
+				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+				 errmsg("cannot access temporary tables of other sessions")));
+
+	switch (rel->rd_rel->relkind)
+	{
+		case RELKIND_RELATION:
+		case RELKIND_TOASTVALUE:
+		case RELKIND_UNCATALOGED:
+		case RELKIND_SEQUENCE:
+			return pgstat_heap(rel, fcinfo);
+		case RELKIND_INDEX:
+			switch (rel->rd_rel->relam)
+			{
+				case BTREE_AM_OID:
+					return pgstat_index(rel, BTREE_METAPAGE + 1,
+										pgstat_btree_page, fcinfo);
+				case HASH_AM_OID:
+					return pgstat_index(rel, HASH_METAPAGE + 1,
+										pgstat_hash_page, fcinfo);
+				case GIST_AM_OID:
+					return pgstat_index(rel, GIST_ROOT_BLKNO + 1,
+										pgstat_gist_page, fcinfo);
+				case GIN_AM_OID:
+					err = "gin index";
+					break;
+				default:
+					err = "unknown index";
+					break;
+			}
+			break;
+		case RELKIND_VIEW:
+			err = "view";
+			break;
+		case RELKIND_COMPOSITE_TYPE:
+			err = "composite type";
+			break;
+		case RELKIND_FOREIGN_TABLE:
+			err = "foreign table";
+			break;
+		default:
+			err = "unknown";
+			break;
+	}
+
+	ereport(ERROR,
+			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+			 errmsg("\"%s\" (%s) is not supported",
+					RelationGetRelationName(rel), err)));
+	return 0;					/* should not happen */
+}
+
+/*
+ * pgstat_heap -- returns live/dead tuples info in a heap
+ */
+static Datum
+pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
+{
+	HeapScanDesc scan;
+	HeapTuple	tuple;
+	BlockNumber nblocks;
+	BlockNumber block = 0;		/* next block to count free space in */
+	BlockNumber tupblock;
+	Buffer		buffer;
+	pgstattuple_type stat = {0};
+
+	/* Disable syncscan because we assume we scan from block zero upwards */
+	scan = heap_beginscan_strat(rel, SnapshotAny, 0, NULL, true, false);
+
+	nblocks = scan->rs_nblocks; /* # blocks to be scanned */
+
+	/* scan the relation */
+	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+	{
+		CHECK_FOR_INTERRUPTS();
+
+		/* must hold a buffer lock to call HeapTupleSatisfiesVisibility */
+		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+
+		if (HeapTupleSatisfiesVisibility(tuple, SnapshotNow, scan->rs_cbuf))
+		{
+			stat.tuple_len += tuple->t_len;
+			stat.tuple_count++;
+		}
+		else
+		{
+			stat.dead_tuple_len += tuple->t_len;
+			stat.dead_tuple_count++;
+		}
+
+		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+
+		/*
+		 * To avoid physically reading the table twice, try to do the
+		 * free-space scan in parallel with the heap scan.	However,
+		 * heap_getnext may find no tuples on a given page, so we cannot
+		 * simply examine the pages returned by the heap scan.
+		 */
+		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+
+		while (block <= tupblock)
+		{
+			CHECK_FOR_INTERRUPTS();
+
+			buffer = ReadBuffer(rel, block);
+			LockBuffer(buffer, BUFFER_LOCK_SHARE);
+			stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
+			UnlockReleaseBuffer(buffer);
+			block++;
+		}
+	}
+	heap_endscan(scan);
+
+	while (block < nblocks)
+	{
+		CHECK_FOR_INTERRUPTS();
+
+		buffer = ReadBuffer(rel, block);
+		LockBuffer(buffer, BUFFER_LOCK_SHARE);
+		stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
+		UnlockReleaseBuffer(buffer);
+		block++;
+	}
+
+	relation_close(rel, AccessShareLock);
+
+	stat.table_len = (uint64) nblocks *BLCKSZ;
+
+	return build_pgstattuple_type(&stat, fcinfo);
+}
+
+/*
+ * pgstat_btree_page -- check tuples in a btree page
+ */
+static void
+pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+{
+	Buffer		buf;
+	Page		page;
+
+	buf = ReadBuffer(rel, blkno);
+	LockBuffer(buf, BT_READ);
+	page = BufferGetPage(buf);
+
+	/* Page is valid, see what to do with it */
+	if (PageIsNew(page))
+	{
+		/* fully empty page */
+		stat->free_space += BLCKSZ;
+	}
+	else
+	{
+		BTPageOpaque opaque;
+
+		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+		if (opaque->btpo_flags & (BTP_DELETED | BTP_HALF_DEAD))
+		{
+			/* recyclable page */
+			stat->free_space += BLCKSZ;
+		}
+		else if (P_ISLEAF(opaque))
+		{
+			pgstat_index_page(stat, page, P_FIRSTDATAKEY(opaque),
+							  PageGetMaxOffsetNumber(page));
+		}
+		else
+		{
+			/* root or node */
+		}
+	}
+
+	_bt_relbuf(rel, buf);
+}
+
+/*
+ * pgstat_hash_page -- check tuples in a hash page
+ */
+static void
+pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+{
+	Buffer		buf;
+	Page		page;
+
+	_hash_getlock(rel, blkno, HASH_SHARE);
+	buf = _hash_getbuf(rel, blkno, HASH_READ, 0);
+	page = BufferGetPage(buf);
+
+	if (PageGetSpecialSize(page) == MAXALIGN(sizeof(HashPageOpaqueData)))
+	{
+		HashPageOpaque opaque;
+
+		opaque = (HashPageOpaque) PageGetSpecialPointer(page);
+		switch (opaque->hasho_flag)
+		{
+			case LH_UNUSED_PAGE:
+				stat->free_space += BLCKSZ;
+				break;
+			case LH_BUCKET_PAGE:
+			case LH_OVERFLOW_PAGE:
+				pgstat_index_page(stat, page, FirstOffsetNumber,
+								  PageGetMaxOffsetNumber(page));
+				break;
+			case LH_BITMAP_PAGE:
+			case LH_META_PAGE:
+			default:
+				break;
+		}
+	}
+	else
+	{
+		/* maybe corrupted */
+	}
+
+	_hash_relbuf(rel, buf);
+	_hash_droplock(rel, blkno, HASH_SHARE);
+}
+
+/*
+ * pgstat_gist_page -- check tuples in a gist page
+ */
+static void
+pgstat_gist_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+{
+	Buffer		buf;
+	Page		page;
+
+	buf = ReadBuffer(rel, blkno);
+	LockBuffer(buf, GIST_SHARE);
+	gistcheckpage(rel, buf);
+	page = BufferGetPage(buf);
+
+	if (GistPageIsLeaf(page))
+	{
+		pgstat_index_page(stat, page, FirstOffsetNumber,
+						  PageGetMaxOffsetNumber(page));
+	}
+	else
+	{
+		/* root or node */
+	}
+
+	UnlockReleaseBuffer(buf);
+}
+
+/*
+ * pgstat_index -- returns live/dead tuples info in a generic index
+ */
+static Datum
+pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
+			 FunctionCallInfo fcinfo)
+{
+	BlockNumber nblocks;
+	BlockNumber blkno;
+	pgstattuple_type stat = {0};
+
+	blkno = start;
+	for (;;)
+	{
+		/* Get the current relation length */
+		LockRelationForExtension(rel, ExclusiveLock);
+		nblocks = RelationGetNumberOfBlocks(rel);
+		UnlockRelationForExtension(rel, ExclusiveLock);
+
+		/* Quit if we've scanned the whole relation */
+		if (blkno >= nblocks)
+		{
+			stat.table_len = (uint64) nblocks *BLCKSZ;
+
+			break;
+		}
+
+		for (; blkno < nblocks; blkno++)
+		{
+			CHECK_FOR_INTERRUPTS();
+
+			pagefn(&stat, rel, blkno);
+		}
+	}
+
+	relation_close(rel, AccessShareLock);
+
+	return build_pgstattuple_type(&stat, fcinfo);
+}
+
+/*
+ * pgstat_index_page -- for generic index page
+ */
+static void
+pgstat_index_page(pgstattuple_type *stat, Page page,
+				  OffsetNumber minoff, OffsetNumber maxoff)
+{
+	OffsetNumber i;
+
+	stat->free_space += PageGetFreeSpace(page);
+
+	for (i = minoff; i <= maxoff; i = OffsetNumberNext(i))
+	{
+		ItemId		itemid = PageGetItemId(page, i);
+
+		if (ItemIdIsDead(itemid))
+		{
+			stat->dead_tuple_count++;
+			stat->dead_tuple_len += ItemIdGetLength(itemid);
+		}
+		else
+		{
+			stat->tuple_count++;
+			stat->tuple_len += ItemIdGetLength(itemid);
+		}
+	}
+}
diff --git a/src/extension/pgstattuple/pgstattuple.control b/src/extension/pgstattuple/pgstattuple.control
new file mode 100644
index 0000000..7b5129b
--- /dev/null
+++ b/src/extension/pgstattuple/pgstattuple.control
@@ -0,0 +1,5 @@
+# pgstattuple extension
+comment = 'show tuple-level statistics'
+default_version = '1.0'
+module_pathname = '$libdir/pgstattuple'
+relocatable = true
#2Vinicius Abrahao
vinnix.bsd@gmail.com
In reply to: Greg Smith (#1)
Re: Core Extensions relocation

Hello Greg, hello All,

This is my first post at Hackers, so sorry if I am been a noob here, but I
am pretty confused about
how to create the extension pg_buffercache.

First of all, I was trying to create using the old method by calling the
pg_buffercache--1.0.sql directly.
Then I discover the change that occurs recently to use CREATE EXTENSION, but
even now I am getting the weird error:

# select * from pg_available_extensions;
name | default_version | installed_version | comment
----------------+-----------------+-------------------+---------------------------------
plpgsql | 1.0 | 1.0 | PL/pgSQL
procedural language
pg_buffercache | 1.0 | | examine the
shared buffer cache
(2 rows)

postgres=# CREATE EXTENSION pg_buffercache SCHEMA pg_catalog;
ERROR: syntax error at or near "NO"

Right now, talking with some fellows at #postgresql they tell that the error
is NOT occurring for they.
This was about 9.1beta from git.

But even so, I need to ask, because my production is on another versions:

What is the right way to install this contrib at 9.0.1, 9.0.2 and 9.0.4 ?

Many thanks,

Best regards,
vinnix

On Thu, Jun 9, 2011 at 1:14 AM, Greg Smith <greg@2ndquadrant.com> wrote:

Following up on the idea we've been exploring for making some extensions
more prominent, attached is the first rev that I think may be worth
considering seriously. Main improvement from the last is that I reorganized
the docs to break out what I decided to tentatively name "Core Extensions"
into their own chapter. No longer mixed in with the rest of the contrib
modules, and I introduce them a bit differently. If you want to take a
quick look at the new page, I copied it to
http://www.2ndquadrant.us/docs/html/extensions.html

I'm not completely happy on the wordering there yet. The use of both
"modules" and "extensions" is probably worth eliminating, and maybe that
continues on to doing that against the language I swiped from the contrib
intro too. There's also a lot of shared text at the end there, common
wording from that and the contrib page about how to install and migrate
these extensions. Not sure how to refactor it out into another section
cleanly though.

Regression tests came up last time I posted this. Doesn't look like there
are any for the modules I'm suggesting should be promoted. Only code issue
I noticed during another self-review here is that I didn't rename
contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql cleanly, may need to do
that one over again to get the commits as clean as possible.

Updated code is at
https://github.com/greg2ndQuadrant/postgres/tree/move-contrib too, and
since this is painful as a patch the compare view at
https://github.com/greg2ndQuadrant/postgres/compare/master...move-contribwill be easier for browsing the code changes.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

--

Vinícius Abrahão Bazana Schmidt
Desenvolvimento
Dextra Sistemas
www.dextra.com.br
+55 19 3256-6722 Ramal 246

Este email é confidencial. Mais informações em:
This message is confidential. More information at:
www.dextra.com.br/confidencial.htm

--
vi[nnix]™
aka: Vinícius Abrahão Bazana Schmidt
vischmidt.wordpress.com
twitter.com/vischmidt

#3Greg Smith
greg@2ndquadrant.com
In reply to: Vinicius Abrahao (#2)
Re: Core Extensions relocation

Vinicius Abrahao wrote:

This is my first post at Hackers, so sorry if I am been a noob here,
but I am pretty confused about
how to create the extension pg_buffercache.

This list is for talking about development of new features, normally on
the latest development version of the software (right now 9.1). There
is no such thing as CREATE EXTENSION in versions before that. A
question like "how do I install pg_buffercache for 9.0?" should normally
get sent to one of the other mailing lists; any of pgsql-performance,
pgsql-admin, or pgsql-general would be appropriate to ask that at. This
one really isn't. It's also better to avoid taking someone else's
discussion and replying to it with your questions.

But even so, I need to ask, because my production is on another versions:
What is the right way to install this contrib at 9.0.1, 9.0.2 and 9.0.4 ?

But since I happen to know this answer, here's an example from a RedHat
derived Linux system running PostgreSQL 9.0.4, logged in as the postgres
user:

-bash-3.2$ locate pg_buffercache.sql
/usr/pgsql-9.0/share/contrib/pg_buffercache.sql
/usr/pgsql-9.0/share/contrib/uninstall_pg_buffercache.sql
-bash-3.2$ psql -d pgbench -f
/usr/pgsql-9.0/share/contrib/pg_buffercache.sql
SET
CREATE FUNCTION
CREATE VIEW
REVOKE
REVOKE
-bash-3.2$ psql -d pgbench -c "select count(*) from pg_buffercache"
count
-------
4096

The location of the file will be different on other platforms, but
that's the basic idea of how you install it.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#4Tom Lane
tgl@sss.pgh.pa.us
In reply to: Vinicius Abrahao (#2)
Re: Core Extensions relocation

Please do not piggyback on an unrelated thread to ask a question.
Start a new thread.

Vinicius Abrahao <vinnix.bsd@gmail.com> writes:

postgres=# CREATE EXTENSION pg_buffercache SCHEMA pg_catalog;
ERROR: syntax error at or near "NO"

This looks like a syntax error in the pg_buffercache--1.0.sql file ...
have you tampered with that at all?

I believe BTW that you cannot specify pg_catalog as the target schema
here. When I try that, I get:

regression=# CREATE EXTENSION pg_buffercache SCHEMA pg_catalog;
ERROR: permission denied to create "pg_catalog.pg_buffercache"
DETAIL: System catalog modifications are currently disallowed.

but it goes through fine without the SCHEMA clause.

regards, tom lane

#5Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Greg Smith (#1)
Re: Core Extensions relocation

Hi,

Greg Smith <greg@2ndquadrant.com> writes:

Following up on the idea we've been exploring for making some extensions
more prominent, attached is the first rev that I think may be worth
considering seriously. Main improvement from the last is that I reorganized
the docs to break out what I decided to tentatively name "Core Extensions"
into their own chapter. No longer mixed in with the rest of the contrib
modules, and I introduce them a bit differently. If you want to take a
quick look at the new page, I copied it to
http://www.2ndquadrant.us/docs/html/extensions.html

Thanks a lot for working on this!

I have two remarks here. First, I think that the “core extensions” (+1
for this naming) should not be found in a documentation appendix, but in
the main documentation, as a new Chapter in Part II where we list data
types and operators and system functions. Between current chapters 9
and 10 would be my vote.

Then, I think the angle to use to present “core extensions” would be
that those are things maintained like the core server, but that you
might not need at all. There's no SQL level feature that rely on them,
it's all “extra”, but it's there nonetheless, because you might need it.

Other than that, +1 for your patch. I'd stress out that I support your
idea to split the extensions at the source level, I think that's really
helping to get the message out: that is trustworthy and maintained code.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

#6Peter Eisentraut
peter_e@gmx.net
In reply to: Greg Smith (#1)
Re: Core Extensions relocation

On tor, 2011-06-09 at 00:14 -0400, Greg Smith wrote:

Following up on the idea we've been exploring for making some
extensions
more prominent, attached is the first rev that I think may be worth
considering seriously. Main improvement from the last is that I
reorganized the docs to break out what I decided to tentatively name
"Core Extensions" into their own chapter. No longer mixed in with
the
rest of the contrib modules, and I introduce them a bit
differently.

For the directory name, I'd prefer either src/extensions (since there is
more than one), or if you want to go for short somehow, src/ext. (Hmm,
I guess the installation subdirectory is also called "extension". But
it felt wrong on first reading anyway.)

There is some funny business in your new src/extension/Makefile. You
apparently based this on a very old version of contrib/Makefile (if
still contains a CVS keyword header), it uses for loops in make targets
after we just got rid of them, and it references some modules that
aren't there at all. That file needs a complete redo based on current
sources, I think.

Equally, your new extension-global.mk sets MODULEDIR, which is no longer
necessary, and has a CVS header. What version did you branch this
off? :)

Perhaps a small addition to the installation instructions would also be
appropriate, to tell people that certain core extensions, as it were,
are installed by default.

#7Greg Smith
greg@2ndquadrant.com
In reply to: Peter Eisentraut (#6)
Re: Core Extensions relocation

Peter Eisentraut wrote:

For the directory name, I'd prefer either src/extensions (since there is
more than one), or if you want to go for short somehow, src/ext. (Hmm,
I guess the installation subdirectory is also called "extension". But
it felt wrong on first reading anyway.)

I jumped between those two a couple of times myself, settling on
"extension" to match the installation location as you figured out.
Assuming that name shouldn't change at this point, this seemed the best
way to name the new directory, even though I agree it seems weird at first.

What version did you branch this off? :)

Long enough ago that apparently I've missed some major changes; Magnus
already pointed out I needed to revisit how MODULEDIR was used. Looks
like I need to rebuild the first patch in this series yet again, which
shouldn't be too bad. The second time I did that, I made the commits
atomic enough that the inevitable third one would be easy.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#8Robert Haas
robertmhaas@gmail.com
In reply to: Greg Smith (#7)
Re: Core Extensions relocation

On Sat, Jun 11, 2011 at 12:38 PM, Greg Smith <greg@2ndquadrant.com> wrote:

Peter Eisentraut wrote:

For the directory name, I'd prefer either src/extensions (since there is
more than one), or if you want to go for short somehow, src/ext.  (Hmm,
I guess the installation subdirectory is also called "extension".  But
it felt wrong on first reading anyway.)

I jumped between those two a couple of times myself, settling on "extension"
to match the installation location as you figured out.  Assuming that name
shouldn't change at this point, this seemed the best way to name the new
directory, even though I agree it seems weird at first.

What version did you branch this off? :)

Long enough ago that apparently I've missed some major changes; Magnus
already pointed out I needed to revisit how MODULEDIR was used.  Looks like
I need to rebuild the first patch in this series yet again, which shouldn't
be too bad.  The second time I did that, I made the commits atomic enough
that the inevitable third one would be easy.

Are you going to do this work for this CommitFest?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#9Bruce Momjian
bruce@momjian.us
In reply to: Greg Smith (#1)
Re: Core Extensions relocation

Is this going to be done for 9.2?

---------------------------------------------------------------------------

Greg Smith wrote:

Following up on the idea we've been exploring for making some extensions
more prominent, attached is the first rev that I think may be worth
considering seriously. Main improvement from the last is that I
reorganized the docs to break out what I decided to tentatively name
"Core Extensions" into their own chapter. No longer mixed in with the
rest of the contrib modules, and I introduce them a bit differently.
If you want to take a quick look at the new page, I copied it to
http://www.2ndquadrant.us/docs/html/extensions.html

I'm not completely happy on the wordering there yet. The use of both
"modules" and "extensions" is probably worth eliminating, and maybe that
continues on to doing that against the language I swiped from the
contrib intro too. There's also a lot of shared text at the end there,
common wording from that and the contrib page about how to install and
migrate these extensions. Not sure how to refactor it out into another
section cleanly though.

Regression tests came up last time I posted this. Doesn't look like
there are any for the modules I'm suggesting should be promoted. Only
code issue I noticed during another self-review here is that I didn't
rename contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql cleanly, may
need to do that one over again to get the commits as clean as possible.

Updated code is at
https://github.com/greg2ndQuadrant/postgres/tree/move-contrib too, and
since this is painful as a patch the compare view at
https://github.com/greg2ndQuadrant/postgres/compare/master...move-contrib
will be easier for browsing the code changes.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#10Thom Brown
thom@linux.com
In reply to: Bruce Momjian (#9)
Re: Core Extensions relocation

On 14 October 2011 17:48, Bruce Momjian <bruce@momjian.us> wrote:

Is this going to be done for 9.2?

---------------------------------------------------------------------------

I didn't spot this thread before. I posted something related
yesterday: http://archives.postgresql.org/pgsql-hackers/2011-10/msg00781.php

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#11Greg Smith
greg@2ndQuadrant.com
In reply to: Bruce Momjian (#9)
Re: Core Extensions relocation

On 10/14/2011 01:48 PM, Bruce Momjian wrote:

Is this going to be done for 9.2?

Refreshing this patch is on my list of things to finish before the next
CommitFest starts later this month.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#12Josh Berkus
josh@agliodbs.com
In reply to: Greg Smith (#11)
Re: Core Extensions relocation

On 11/2/11 8:25 AM, Greg Smith wrote:

On 10/14/2011 01:48 PM, Bruce Momjian wrote:

Is this going to be done for 9.2?

Refreshing this patch is on my list of things to finish before the next
CommitFest starts later this month.

Put me down as reviewer.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#13Greg Smith
greg@2ndQuadrant.com
In reply to: Greg Smith (#11)
1 attachment(s)
Re: Core Extensions relocation

I've revived the corpose of the patch submitted in May, now that it's a
much less strange time of the development cycle to consider it.
http://archives.postgresql.org/message-id/4DF048BD.8040302@2ndquadrant.com
was the first attempt to move some extensions from contrib/ to a new
src/extension/ directory. I have fixed the main complaints from the
last submit attempt, that I accidentally grabbed some old makesfiles and
CVS junk. The new attempt is attached, and is easiest to follow with
the a diff view that understands "moved a file", like github's:
https://github.com/greg2ndQuadrant/postgres/compare/master...core-extensions

You can also check out the docs changes done so far at
http://www.highperfpostgres.com/docs/html/extensions.html I reorganized
the docs to break out what I decided to tentatively name "Core
Extensions" into their own chapter. They're no longer mixed in with the
rest of the contrib modules, and I introduce them a bit differently.
I'm not completely happy on the wordering there yet. The use of both
"modules" and "extensions" is probably worth eliminating, and maybe that
continues on to doing that against the language I swiped from the
contrib intro too. There's also a lot of shared text at the end there,
common wording from that and the contrib page about how to install and
migrate these extensions. Not sure how to refactor it out into another
section cleanly though.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

Attachments:

core-extensions-v2.patchtext/x-patch; name=core-extensions-v2.patchDownload
diff --git a/contrib/Makefile b/contrib/Makefile
index 0c238aa..f43a8d3 100644
*** a/contrib/Makefile
--- b/contrib/Makefile
*************** include $(top_builddir)/src/Makefile.glo
*** 7,13 ****
  SUBDIRS = \
  		adminpack	\
  		auth_delay	\
- 		auto_explain	\
  		btree_gin	\
  		btree_gist	\
  		chkpass		\
--- 7,12 ----
*************** SUBDIRS = \
*** 27,47 ****
  		lo		\
  		ltree		\
  		oid2name	\
- 		pageinspect	\
  		passwordcheck	\
  		pg_archivecleanup \
- 		pg_buffercache	\
- 		pg_freespacemap \
  		pg_standby	\
- 		pg_stat_statements \
  		pg_test_fsync	\
  		pg_trgm		\
  		pg_upgrade	\
  		pg_upgrade_support \
  		pgbench		\
  		pgcrypto	\
- 		pgrowlocks	\
- 		pgstattuple	\
  		seg		\
  		spi		\
  		tablefunc	\
--- 26,40 ----
diff --git a/contrib/auto_explain/Makefile b/contrib/auto_explain/Makefile
index 2d1443f..e69de29 100644
*** a/contrib/auto_explain/Makefile
--- b/contrib/auto_explain/Makefile
***************
*** 1,15 ****
- # contrib/auto_explain/Makefile
- 
- MODULE_big = auto_explain
- OBJS = auto_explain.o
- 
- ifdef USE_PGXS
- PG_CONFIG = pg_config
- PGXS := $(shell $(PG_CONFIG) --pgxs)
- include $(PGXS)
- else
- subdir = contrib/auto_explain
- top_builddir = ../..
- include $(top_builddir)/src/Makefile.global
- include $(top_srcdir)/contrib/contrib-global.mk
- endif
--- 0 ----
diff --git a/contrib/auto_explain/auto_explain.c b/contrib/auto_explain/auto_explain.c
index b320698..e69de29 100644
*** a/contrib/auto_explain/auto_explain.c
--- b/contrib/auto_explain/auto_explain.c
***************
*** 1,304 ****
- /*-------------------------------------------------------------------------
-  *
-  * auto_explain.c
-  *
-  *
-  * Copyright (c) 2008-2011, PostgreSQL Global Development Group
-  *
-  * IDENTIFICATION
-  *	  contrib/auto_explain/auto_explain.c
-  *
-  *-------------------------------------------------------------------------
-  */
- #include "postgres.h"
- 
- #include "commands/explain.h"
- #include "executor/instrument.h"
- #include "utils/guc.h"
- 
- PG_MODULE_MAGIC;
- 
- /* GUC variables */
- static int	auto_explain_log_min_duration = -1; /* msec or -1 */
- static bool auto_explain_log_analyze = false;
- static bool auto_explain_log_verbose = false;
- static bool auto_explain_log_buffers = false;
- static int	auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
- static bool auto_explain_log_nested_statements = false;
- 
- static const struct config_enum_entry format_options[] = {
- 	{"text", EXPLAIN_FORMAT_TEXT, false},
- 	{"xml", EXPLAIN_FORMAT_XML, false},
- 	{"json", EXPLAIN_FORMAT_JSON, false},
- 	{"yaml", EXPLAIN_FORMAT_YAML, false},
- 	{NULL, 0, false}
- };
- 
- /* Current nesting depth of ExecutorRun calls */
- static int	nesting_level = 0;
- 
- /* Saved hook values in case of unload */
- static ExecutorStart_hook_type prev_ExecutorStart = NULL;
- static ExecutorRun_hook_type prev_ExecutorRun = NULL;
- static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
- static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
- 
- #define auto_explain_enabled() \
- 	(auto_explain_log_min_duration >= 0 && \
- 	 (nesting_level == 0 || auto_explain_log_nested_statements))
- 
- void		_PG_init(void);
- void		_PG_fini(void);
- 
- static void explain_ExecutorStart(QueryDesc *queryDesc, int eflags);
- static void explain_ExecutorRun(QueryDesc *queryDesc,
- 					ScanDirection direction,
- 					long count);
- static void explain_ExecutorFinish(QueryDesc *queryDesc);
- static void explain_ExecutorEnd(QueryDesc *queryDesc);
- 
- 
- /*
-  * Module load callback
-  */
- void
- _PG_init(void)
- {
- 	/* Define custom GUC variables. */
- 	DefineCustomIntVariable("auto_explain.log_min_duration",
- 		 "Sets the minimum execution time above which plans will be logged.",
- 						 "Zero prints all plans. -1 turns this feature off.",
- 							&auto_explain_log_min_duration,
- 							-1,
- 							-1, INT_MAX / 1000,
- 							PGC_SUSET,
- 							GUC_UNIT_MS,
- 							NULL,
- 							NULL,
- 							NULL);
- 
- 	DefineCustomBoolVariable("auto_explain.log_analyze",
- 							 "Use EXPLAIN ANALYZE for plan logging.",
- 							 NULL,
- 							 &auto_explain_log_analyze,
- 							 false,
- 							 PGC_SUSET,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	DefineCustomBoolVariable("auto_explain.log_verbose",
- 							 "Use EXPLAIN VERBOSE for plan logging.",
- 							 NULL,
- 							 &auto_explain_log_verbose,
- 							 false,
- 							 PGC_SUSET,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	DefineCustomBoolVariable("auto_explain.log_buffers",
- 							 "Log buffers usage.",
- 							 NULL,
- 							 &auto_explain_log_buffers,
- 							 false,
- 							 PGC_SUSET,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	DefineCustomEnumVariable("auto_explain.log_format",
- 							 "EXPLAIN format to be used for plan logging.",
- 							 NULL,
- 							 &auto_explain_log_format,
- 							 EXPLAIN_FORMAT_TEXT,
- 							 format_options,
- 							 PGC_SUSET,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	DefineCustomBoolVariable("auto_explain.log_nested_statements",
- 							 "Log nested statements.",
- 							 NULL,
- 							 &auto_explain_log_nested_statements,
- 							 false,
- 							 PGC_SUSET,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	EmitWarningsOnPlaceholders("auto_explain");
- 
- 	/* Install hooks. */
- 	prev_ExecutorStart = ExecutorStart_hook;
- 	ExecutorStart_hook = explain_ExecutorStart;
- 	prev_ExecutorRun = ExecutorRun_hook;
- 	ExecutorRun_hook = explain_ExecutorRun;
- 	prev_ExecutorFinish = ExecutorFinish_hook;
- 	ExecutorFinish_hook = explain_ExecutorFinish;
- 	prev_ExecutorEnd = ExecutorEnd_hook;
- 	ExecutorEnd_hook = explain_ExecutorEnd;
- }
- 
- /*
-  * Module unload callback
-  */
- void
- _PG_fini(void)
- {
- 	/* Uninstall hooks. */
- 	ExecutorStart_hook = prev_ExecutorStart;
- 	ExecutorRun_hook = prev_ExecutorRun;
- 	ExecutorFinish_hook = prev_ExecutorFinish;
- 	ExecutorEnd_hook = prev_ExecutorEnd;
- }
- 
- /*
-  * ExecutorStart hook: start up logging if needed
-  */
- static void
- explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
- {
- 	if (auto_explain_enabled())
- 	{
- 		/* Enable per-node instrumentation iff log_analyze is required. */
- 		if (auto_explain_log_analyze && (eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0)
- 		{
- 			queryDesc->instrument_options |= INSTRUMENT_TIMER;
- 			if (auto_explain_log_buffers)
- 				queryDesc->instrument_options |= INSTRUMENT_BUFFERS;
- 		}
- 	}
- 
- 	if (prev_ExecutorStart)
- 		prev_ExecutorStart(queryDesc, eflags);
- 	else
- 		standard_ExecutorStart(queryDesc, eflags);
- 
- 	if (auto_explain_enabled())
- 	{
- 		/*
- 		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
- 		 * space is allocated in the per-query context so it will go away at
- 		 * ExecutorEnd.
- 		 */
- 		if (queryDesc->totaltime == NULL)
- 		{
- 			MemoryContext oldcxt;
- 
- 			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
- 			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
- 			MemoryContextSwitchTo(oldcxt);
- 		}
- 	}
- }
- 
- /*
-  * ExecutorRun hook: all we need do is track nesting depth
-  */
- static void
- explain_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
- {
- 	nesting_level++;
- 	PG_TRY();
- 	{
- 		if (prev_ExecutorRun)
- 			prev_ExecutorRun(queryDesc, direction, count);
- 		else
- 			standard_ExecutorRun(queryDesc, direction, count);
- 		nesting_level--;
- 	}
- 	PG_CATCH();
- 	{
- 		nesting_level--;
- 		PG_RE_THROW();
- 	}
- 	PG_END_TRY();
- }
- 
- /*
-  * ExecutorFinish hook: all we need do is track nesting depth
-  */
- static void
- explain_ExecutorFinish(QueryDesc *queryDesc)
- {
- 	nesting_level++;
- 	PG_TRY();
- 	{
- 		if (prev_ExecutorFinish)
- 			prev_ExecutorFinish(queryDesc);
- 		else
- 			standard_ExecutorFinish(queryDesc);
- 		nesting_level--;
- 	}
- 	PG_CATCH();
- 	{
- 		nesting_level--;
- 		PG_RE_THROW();
- 	}
- 	PG_END_TRY();
- }
- 
- /*
-  * ExecutorEnd hook: log results if needed
-  */
- static void
- explain_ExecutorEnd(QueryDesc *queryDesc)
- {
- 	if (queryDesc->totaltime && auto_explain_enabled())
- 	{
- 		double		msec;
- 
- 		/*
- 		 * Make sure stats accumulation is done.  (Note: it's okay if several
- 		 * levels of hook all do this.)
- 		 */
- 		InstrEndLoop(queryDesc->totaltime);
- 
- 		/* Log plan if duration is exceeded. */
- 		msec = queryDesc->totaltime->total * 1000.0;
- 		if (msec >= auto_explain_log_min_duration)
- 		{
- 			ExplainState es;
- 
- 			ExplainInitState(&es);
- 			es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
- 			es.verbose = auto_explain_log_verbose;
- 			es.buffers = (es.analyze && auto_explain_log_buffers);
- 			es.format = auto_explain_log_format;
- 
- 			ExplainBeginOutput(&es);
- 			ExplainQueryText(&es, queryDesc);
- 			ExplainPrintPlan(&es, queryDesc);
- 			ExplainEndOutput(&es);
- 
- 			/* Remove last line break */
- 			if (es.str->len > 0 && es.str->data[es.str->len - 1] == '\n')
- 				es.str->data[--es.str->len] = '\0';
- 
- 			/*
- 			 * Note: we rely on the existing logging of context or
- 			 * debug_query_string to identify just which statement is being
- 			 * reported.  This isn't ideal but trying to do it here would
- 			 * often result in duplication.
- 			 */
- 			ereport(LOG,
- 					(errmsg("duration: %.3f ms  plan:\n%s",
- 							msec, es.str->data),
- 					 errhidestmt(true)));
- 
- 			pfree(es.str->data);
- 		}
- 	}
- 
- 	if (prev_ExecutorEnd)
- 		prev_ExecutorEnd(queryDesc);
- 	else
- 		standard_ExecutorEnd(queryDesc);
- }
--- 0 ----
diff --git a/contrib/pageinspect/Makefile b/contrib/pageinspect/Makefile
index 13ba6d3..e69de29 100644
*** a/contrib/pageinspect/Makefile
--- b/contrib/pageinspect/Makefile
***************
*** 1,18 ****
- # contrib/pageinspect/Makefile
- 
- MODULE_big	= pageinspect
- OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
- 
- EXTENSION = pageinspect
- DATA = pageinspect--1.0.sql pageinspect--unpackaged--1.0.sql
- 
- ifdef USE_PGXS
- PG_CONFIG = pg_config
- PGXS := $(shell $(PG_CONFIG) --pgxs)
- include $(PGXS)
- else
- subdir = contrib/pageinspect
- top_builddir = ../..
- include $(top_builddir)/src/Makefile.global
- include $(top_srcdir)/contrib/contrib-global.mk
- endif
--- 0 ----
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index dbb2158..e69de29 100644
*** a/contrib/pageinspect/btreefuncs.c
--- b/contrib/pageinspect/btreefuncs.c
***************
*** 1,500 ****
- /*
-  * contrib/pageinspect/btreefuncs.c
-  *
-  *
-  * btreefuncs.c
-  *
-  * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
-  *
-  * Permission to use, copy, modify, and distribute this software and
-  * its documentation for any purpose, without fee, and without a
-  * written agreement is hereby granted, provided that the above
-  * copyright notice and this paragraph and the following two
-  * paragraphs appear in all copies.
-  *
-  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
-  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
-  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
-  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
-  * OF THE POSSIBILITY OF SUCH DAMAGE.
-  *
-  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
-  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
-  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
-  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
-  */
- 
- #include "postgres.h"
- 
- #include "access/nbtree.h"
- #include "catalog/namespace.h"
- #include "funcapi.h"
- #include "miscadmin.h"
- #include "utils/builtins.h"
- #include "utils/rel.h"
- 
- 
- extern Datum bt_metap(PG_FUNCTION_ARGS);
- extern Datum bt_page_items(PG_FUNCTION_ARGS);
- extern Datum bt_page_stats(PG_FUNCTION_ARGS);
- 
- PG_FUNCTION_INFO_V1(bt_metap);
- PG_FUNCTION_INFO_V1(bt_page_items);
- PG_FUNCTION_INFO_V1(bt_page_stats);
- 
- #define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
- #define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
- 
- #define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
- 		if ( !(FirstOffsetNumber <= (offnum) && \
- 						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
- 			 elog(ERROR, "page offset number out of range"); }
- 
- /* note: BlockNumber is unsigned, hence can't be negative */
- #define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
- 		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
- 			 elog(ERROR, "block number out of range"); }
- 
- /* ------------------------------------------------
-  * structure for single btree page statistics
-  * ------------------------------------------------
-  */
- typedef struct BTPageStat
- {
- 	uint32		blkno;
- 	uint32		live_items;
- 	uint32		dead_items;
- 	uint32		page_size;
- 	uint32		max_avail;
- 	uint32		free_size;
- 	uint32		avg_item_size;
- 	char		type;
- 
- 	/* opaque data */
- 	BlockNumber btpo_prev;
- 	BlockNumber btpo_next;
- 	union
- 	{
- 		uint32		level;
- 		TransactionId xact;
- 	}			btpo;
- 	uint16		btpo_flags;
- 	BTCycleId	btpo_cycleid;
- } BTPageStat;
- 
- 
- /* -------------------------------------------------
-  * GetBTPageStatistics()
-  *
-  * Collect statistics of single b-tree page
-  * -------------------------------------------------
-  */
- static void
- GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
- {
- 	Page		page = BufferGetPage(buffer);
- 	PageHeader	phdr = (PageHeader) page;
- 	OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
- 	BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
- 	int			item_size = 0;
- 	int			off;
- 
- 	stat->blkno = blkno;
- 
- 	stat->max_avail = BLCKSZ - (BLCKSZ - phdr->pd_special + SizeOfPageHeaderData);
- 
- 	stat->dead_items = stat->live_items = 0;
- 
- 	stat->page_size = PageGetPageSize(page);
- 
- 	/* page type (flags) */
- 	if (P_ISDELETED(opaque))
- 	{
- 		stat->type = 'd';
- 		stat->btpo.xact = opaque->btpo.xact;
- 		return;
- 	}
- 	else if (P_IGNORE(opaque))
- 		stat->type = 'e';
- 	else if (P_ISLEAF(opaque))
- 		stat->type = 'l';
- 	else if (P_ISROOT(opaque))
- 		stat->type = 'r';
- 	else
- 		stat->type = 'i';
- 
- 	/* btpage opaque data */
- 	stat->btpo_prev = opaque->btpo_prev;
- 	stat->btpo_next = opaque->btpo_next;
- 	stat->btpo.level = opaque->btpo.level;
- 	stat->btpo_flags = opaque->btpo_flags;
- 	stat->btpo_cycleid = opaque->btpo_cycleid;
- 
- 	/* count live and dead tuples, and free space */
- 	for (off = FirstOffsetNumber; off <= maxoff; off++)
- 	{
- 		IndexTuple	itup;
- 
- 		ItemId		id = PageGetItemId(page, off);
- 
- 		itup = (IndexTuple) PageGetItem(page, id);
- 
- 		item_size += IndexTupleSize(itup);
- 
- 		if (!ItemIdIsDead(id))
- 			stat->live_items++;
- 		else
- 			stat->dead_items++;
- 	}
- 	stat->free_size = PageGetFreeSpace(page);
- 
- 	if ((stat->live_items + stat->dead_items) > 0)
- 		stat->avg_item_size = item_size / (stat->live_items + stat->dead_items);
- 	else
- 		stat->avg_item_size = 0;
- }
- 
- /* -----------------------------------------------
-  * bt_page()
-  *
-  * Usage: SELECT * FROM bt_page('t1_pkey', 1);
-  * -----------------------------------------------
-  */
- Datum
- bt_page_stats(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	uint32		blkno = PG_GETARG_UINT32(1);
- 	Buffer		buffer;
- 	Relation	rel;
- 	RangeVar   *relrv;
- 	Datum		result;
- 	HeapTuple	tuple;
- 	TupleDesc	tupleDesc;
- 	int			j;
- 	char	   *values[11];
- 	BTPageStat	stat;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use pageinspect functions"))));
- 
- 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 	rel = relation_openrv(relrv, AccessShareLock);
- 
- 	if (!IS_INDEX(rel) || !IS_BTREE(rel))
- 		elog(ERROR, "relation \"%s\" is not a btree index",
- 			 RelationGetRelationName(rel));
- 
- 	/*
- 	 * Reject attempts to read non-local temporary relations; we would be
- 	 * likely to get wrong data since we have no visibility into the owning
- 	 * session's local buffers.
- 	 */
- 	if (RELATION_IS_OTHER_TEMP(rel))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				 errmsg("cannot access temporary tables of other sessions")));
- 
- 	if (blkno == 0)
- 		elog(ERROR, "block 0 is a meta page");
- 
- 	CHECK_RELATION_BLOCK_RANGE(rel, blkno);
- 
- 	buffer = ReadBuffer(rel, blkno);
- 
- 	/* keep compiler quiet */
- 	stat.btpo_prev = stat.btpo_next = InvalidBlockNumber;
- 	stat.btpo_flags = stat.free_size = stat.avg_item_size = 0;
- 
- 	GetBTPageStatistics(blkno, buffer, &stat);
- 
- 	/* Build a tuple descriptor for our result type */
- 	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
- 		elog(ERROR, "return type must be a row type");
- 
- 	j = 0;
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.blkno);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%c", stat.type);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.live_items);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.dead_items);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.avg_item_size);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.page_size);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.free_size);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.btpo_prev);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.btpo_next);
- 	values[j] = palloc(32);
- 	if (stat.type == 'd')
- 		snprintf(values[j++], 32, "%d", stat.btpo.xact);
- 	else
- 		snprintf(values[j++], 32, "%d", stat.btpo.level);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", stat.btpo_flags);
- 
- 	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
- 								   values);
- 
- 	result = HeapTupleGetDatum(tuple);
- 
- 	ReleaseBuffer(buffer);
- 
- 	relation_close(rel, AccessShareLock);
- 
- 	PG_RETURN_DATUM(result);
- }
- 
- /*-------------------------------------------------------
-  * bt_page_items()
-  *
-  * Get IndexTupleData set in a btree page
-  *
-  * Usage: SELECT * FROM bt_page_items('t1_pkey', 1);
-  *-------------------------------------------------------
-  */
- 
- /*
-  * cross-call data structure for SRF
-  */
- struct user_args
- {
- 	Page		page;
- 	OffsetNumber offset;
- };
- 
- Datum
- bt_page_items(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	uint32		blkno = PG_GETARG_UINT32(1);
- 	Datum		result;
- 	char	   *values[6];
- 	HeapTuple	tuple;
- 	FuncCallContext *fctx;
- 	MemoryContext mctx;
- 	struct user_args *uargs;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use pageinspect functions"))));
- 
- 	if (SRF_IS_FIRSTCALL())
- 	{
- 		RangeVar   *relrv;
- 		Relation	rel;
- 		Buffer		buffer;
- 		BTPageOpaque opaque;
- 		TupleDesc	tupleDesc;
- 
- 		fctx = SRF_FIRSTCALL_INIT();
- 
- 		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 		rel = relation_openrv(relrv, AccessShareLock);
- 
- 		if (!IS_INDEX(rel) || !IS_BTREE(rel))
- 			elog(ERROR, "relation \"%s\" is not a btree index",
- 				 RelationGetRelationName(rel));
- 
- 		/*
- 		 * Reject attempts to read non-local temporary relations; we would be
- 		 * likely to get wrong data since we have no visibility into the
- 		 * owning session's local buffers.
- 		 */
- 		if (RELATION_IS_OTHER_TEMP(rel))
- 			ereport(ERROR,
- 					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				errmsg("cannot access temporary tables of other sessions")));
- 
- 		if (blkno == 0)
- 			elog(ERROR, "block 0 is a meta page");
- 
- 		CHECK_RELATION_BLOCK_RANGE(rel, blkno);
- 
- 		buffer = ReadBuffer(rel, blkno);
- 
- 		/*
- 		 * We copy the page into local storage to avoid holding pin on the
- 		 * buffer longer than we must, and possibly failing to release it at
- 		 * all if the calling query doesn't fetch all rows.
- 		 */
- 		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
- 
- 		uargs = palloc(sizeof(struct user_args));
- 
- 		uargs->page = palloc(BLCKSZ);
- 		memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
- 
- 		ReleaseBuffer(buffer);
- 		relation_close(rel, AccessShareLock);
- 
- 		uargs->offset = FirstOffsetNumber;
- 
- 		opaque = (BTPageOpaque) PageGetSpecialPointer(uargs->page);
- 
- 		if (P_ISDELETED(opaque))
- 			elog(NOTICE, "page is deleted");
- 
- 		fctx->max_calls = PageGetMaxOffsetNumber(uargs->page);
- 
- 		/* Build a tuple descriptor for our result type */
- 		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
- 			elog(ERROR, "return type must be a row type");
- 
- 		fctx->attinmeta = TupleDescGetAttInMetadata(tupleDesc);
- 
- 		fctx->user_fctx = uargs;
- 
- 		MemoryContextSwitchTo(mctx);
- 	}
- 
- 	fctx = SRF_PERCALL_SETUP();
- 	uargs = fctx->user_fctx;
- 
- 	if (fctx->call_cntr < fctx->max_calls)
- 	{
- 		ItemId		id;
- 		IndexTuple	itup;
- 		int			j;
- 		int			off;
- 		int			dlen;
- 		char	   *dump;
- 		char	   *ptr;
- 
- 		id = PageGetItemId(uargs->page, uargs->offset);
- 
- 		if (!ItemIdIsValid(id))
- 			elog(ERROR, "invalid ItemId");
- 
- 		itup = (IndexTuple) PageGetItem(uargs->page, id);
- 
- 		j = 0;
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "%d", uargs->offset);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "(%u,%u)",
- 				 BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
- 				 itup->t_tid.ip_posid);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "%d", (int) IndexTupleSize(itup));
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "%c", IndexTupleHasNulls(itup) ? 't' : 'f');
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
- 
- 		ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
- 		dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
- 		dump = palloc0(dlen * 3 + 1);
- 		values[j] = dump;
- 		for (off = 0; off < dlen; off++)
- 		{
- 			if (off > 0)
- 				*dump++ = ' ';
- 			sprintf(dump, "%02x", *(ptr + off) & 0xff);
- 			dump += 2;
- 		}
- 
- 		tuple = BuildTupleFromCStrings(fctx->attinmeta, values);
- 		result = HeapTupleGetDatum(tuple);
- 
- 		uargs->offset = uargs->offset + 1;
- 
- 		SRF_RETURN_NEXT(fctx, result);
- 	}
- 	else
- 	{
- 		pfree(uargs->page);
- 		pfree(uargs);
- 		SRF_RETURN_DONE(fctx);
- 	}
- }
- 
- 
- /* ------------------------------------------------
-  * bt_metap()
-  *
-  * Get a btree's meta-page information
-  *
-  * Usage: SELECT * FROM bt_metap('t1_pkey')
-  * ------------------------------------------------
-  */
- Datum
- bt_metap(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	Datum		result;
- 	Relation	rel;
- 	RangeVar   *relrv;
- 	BTMetaPageData *metad;
- 	TupleDesc	tupleDesc;
- 	int			j;
- 	char	   *values[6];
- 	Buffer		buffer;
- 	Page		page;
- 	HeapTuple	tuple;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use pageinspect functions"))));
- 
- 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 	rel = relation_openrv(relrv, AccessShareLock);
- 
- 	if (!IS_INDEX(rel) || !IS_BTREE(rel))
- 		elog(ERROR, "relation \"%s\" is not a btree index",
- 			 RelationGetRelationName(rel));
- 
- 	/*
- 	 * Reject attempts to read non-local temporary relations; we would be
- 	 * likely to get wrong data since we have no visibility into the owning
- 	 * session's local buffers.
- 	 */
- 	if (RELATION_IS_OTHER_TEMP(rel))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				 errmsg("cannot access temporary tables of other sessions")));
- 
- 	buffer = ReadBuffer(rel, 0);
- 	page = BufferGetPage(buffer);
- 	metad = BTPageGetMeta(page);
- 
- 	/* Build a tuple descriptor for our result type */
- 	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
- 		elog(ERROR, "return type must be a row type");
- 
- 	j = 0;
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", metad->btm_magic);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", metad->btm_version);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", metad->btm_root);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", metad->btm_level);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", metad->btm_fastroot);
- 	values[j] = palloc(32);
- 	snprintf(values[j++], 32, "%d", metad->btm_fastlevel);
- 
- 	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
- 								   values);
- 
- 	result = HeapTupleGetDatum(tuple);
- 
- 	ReleaseBuffer(buffer);
- 
- 	relation_close(rel, AccessShareLock);
- 
- 	PG_RETURN_DATUM(result);
- }
--- 0 ----
diff --git a/contrib/pageinspect/fsmfuncs.c b/contrib/pageinspect/fsmfuncs.c
index 0d6bc14..e69de29 100644
*** a/contrib/pageinspect/fsmfuncs.c
--- b/contrib/pageinspect/fsmfuncs.c
***************
*** 1,58 ****
- /*-------------------------------------------------------------------------
-  *
-  * fsmfuncs.c
-  *	  Functions to investigate FSM pages
-  *
-  * These functions are restricted to superusers for the fear of introducing
-  * security holes if the input checking isn't as water-tight as it should.
-  * You'd need to be superuser to obtain a raw page image anyway, so
-  * there's hardly any use case for using these without superuser-rights
-  * anyway.
-  *
-  * Copyright (c) 2007-2011, PostgreSQL Global Development Group
-  *
-  * IDENTIFICATION
-  *	  contrib/pageinspect/fsmfuncs.c
-  *
-  *-------------------------------------------------------------------------
-  */
- 
- #include "postgres.h"
- #include "storage/fsm_internals.h"
- #include "utils/builtins.h"
- #include "miscadmin.h"
- #include "funcapi.h"
- 
- Datum		fsm_page_contents(PG_FUNCTION_ARGS);
- 
- /*
-  * Dumps the contents of a FSM page.
-  */
- PG_FUNCTION_INFO_V1(fsm_page_contents);
- 
- Datum
- fsm_page_contents(PG_FUNCTION_ARGS)
- {
- 	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
- 	StringInfoData sinfo;
- 	FSMPage		fsmpage;
- 	int			i;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use raw page functions"))));
- 
- 	fsmpage = (FSMPage) PageGetContents(VARDATA(raw_page));
- 
- 	initStringInfo(&sinfo);
- 
- 	for (i = 0; i < NodesPerPage; i++)
- 	{
- 		if (fsmpage->fp_nodes[i] != 0)
- 			appendStringInfo(&sinfo, "%d: %d\n", i, fsmpage->fp_nodes[i]);
- 	}
- 	appendStringInfo(&sinfo, "fp_next_slot: %d\n", fsmpage->fp_next_slot);
- 
- 	PG_RETURN_TEXT_P(cstring_to_text(sinfo.data));
- }
--- 0 ----
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index fa50655..e69de29 100644
*** a/contrib/pageinspect/heapfuncs.c
--- b/contrib/pageinspect/heapfuncs.c
***************
*** 1,225 ****
- /*-------------------------------------------------------------------------
-  *
-  * heapfuncs.c
-  *	  Functions to investigate heap pages
-  *
-  * We check the input to these functions for corrupt pointers etc. that
-  * might cause crashes, but at the same time we try to print out as much
-  * information as possible, even if it's nonsense. That's because if a
-  * page is corrupt, we don't know why and how exactly it is corrupt, so we
-  * let the user judge it.
-  *
-  * These functions are restricted to superusers for the fear of introducing
-  * security holes if the input checking isn't as water-tight as it should be.
-  * You'd need to be superuser to obtain a raw page image anyway, so
-  * there's hardly any use case for using these without superuser-rights
-  * anyway.
-  *
-  * Copyright (c) 2007-2011, PostgreSQL Global Development Group
-  *
-  * IDENTIFICATION
-  *	  contrib/pageinspect/heapfuncs.c
-  *
-  *-------------------------------------------------------------------------
-  */
- 
- #include "postgres.h"
- 
- #include "funcapi.h"
- #include "utils/builtins.h"
- #include "miscadmin.h"
- 
- Datum		heap_page_items(PG_FUNCTION_ARGS);
- 
- 
- /*
-  * bits_to_text
-  *
-  * Converts a bits8-array of 'len' bits to a human-readable
-  * c-string representation.
-  */
- static char *
- bits_to_text(bits8 *bits, int len)
- {
- 	int			i;
- 	char	   *str;
- 
- 	str = palloc(len + 1);
- 
- 	for (i = 0; i < len; i++)
- 		str[i] = (bits[(i / 8)] & (1 << (i % 8))) ? '1' : '0';
- 
- 	str[i] = '\0';
- 
- 	return str;
- }
- 
- 
- /*
-  * heap_page_items
-  *
-  * Allows inspection of line pointers and tuple headers of a heap page.
-  */
- PG_FUNCTION_INFO_V1(heap_page_items);
- 
- typedef struct heap_page_items_state
- {
- 	TupleDesc	tupd;
- 	Page		page;
- 	uint16		offset;
- } heap_page_items_state;
- 
- Datum
- heap_page_items(PG_FUNCTION_ARGS)
- {
- 	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
- 	heap_page_items_state *inter_call_data = NULL;
- 	FuncCallContext *fctx;
- 	int			raw_page_size;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use raw page functions"))));
- 
- 	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
- 
- 	if (SRF_IS_FIRSTCALL())
- 	{
- 		TupleDesc	tupdesc;
- 		MemoryContext mctx;
- 
- 		if (raw_page_size < SizeOfPageHeaderData)
- 			ereport(ERROR,
- 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- 				  errmsg("input page too small (%d bytes)", raw_page_size)));
- 
- 		fctx = SRF_FIRSTCALL_INIT();
- 		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
- 
- 		inter_call_data = palloc(sizeof(heap_page_items_state));
- 
- 		/* Build a tuple descriptor for our result type */
- 		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
- 			elog(ERROR, "return type must be a row type");
- 
- 		inter_call_data->tupd = tupdesc;
- 
- 		inter_call_data->offset = FirstOffsetNumber;
- 		inter_call_data->page = VARDATA(raw_page);
- 
- 		fctx->max_calls = PageGetMaxOffsetNumber(inter_call_data->page);
- 		fctx->user_fctx = inter_call_data;
- 
- 		MemoryContextSwitchTo(mctx);
- 	}
- 
- 	fctx = SRF_PERCALL_SETUP();
- 	inter_call_data = fctx->user_fctx;
- 
- 	if (fctx->call_cntr < fctx->max_calls)
- 	{
- 		Page		page = inter_call_data->page;
- 		HeapTuple	resultTuple;
- 		Datum		result;
- 		ItemId		id;
- 		Datum		values[13];
- 		bool		nulls[13];
- 		uint16		lp_offset;
- 		uint16		lp_flags;
- 		uint16		lp_len;
- 
- 		memset(nulls, 0, sizeof(nulls));
- 
- 		/* Extract information from the line pointer */
- 
- 		id = PageGetItemId(page, inter_call_data->offset);
- 
- 		lp_offset = ItemIdGetOffset(id);
- 		lp_flags = ItemIdGetFlags(id);
- 		lp_len = ItemIdGetLength(id);
- 
- 		values[0] = UInt16GetDatum(inter_call_data->offset);
- 		values[1] = UInt16GetDatum(lp_offset);
- 		values[2] = UInt16GetDatum(lp_flags);
- 		values[3] = UInt16GetDatum(lp_len);
- 
- 		/*
- 		 * We do just enough validity checking to make sure we don't reference
- 		 * data outside the page passed to us. The page could be corrupt in
- 		 * many other ways, but at least we won't crash.
- 		 */
- 		if (ItemIdHasStorage(id) &&
- 			lp_len >= sizeof(HeapTupleHeader) &&
- 			lp_offset == MAXALIGN(lp_offset) &&
- 			lp_offset + lp_len <= raw_page_size)
- 		{
- 			HeapTupleHeader tuphdr;
- 			int			bits_len;
- 
- 			/* Extract information from the tuple header */
- 
- 			tuphdr = (HeapTupleHeader) PageGetItem(page, id);
- 
- 			values[4] = UInt32GetDatum(HeapTupleHeaderGetXmin(tuphdr));
- 			values[5] = UInt32GetDatum(HeapTupleHeaderGetXmax(tuphdr));
- 			values[6] = UInt32GetDatum(HeapTupleHeaderGetRawCommandId(tuphdr)); /* shared with xvac */
- 			values[7] = PointerGetDatum(&tuphdr->t_ctid);
- 			values[8] = UInt32GetDatum(tuphdr->t_infomask2);
- 			values[9] = UInt32GetDatum(tuphdr->t_infomask);
- 			values[10] = UInt8GetDatum(tuphdr->t_hoff);
- 
- 			/*
- 			 * We already checked that the item as is completely within the
- 			 * raw page passed to us, with the length given in the line
- 			 * pointer.. Let's check that t_hoff doesn't point over lp_len,
- 			 * before using it to access t_bits and oid.
- 			 */
- 			if (tuphdr->t_hoff >= sizeof(HeapTupleHeader) &&
- 				tuphdr->t_hoff <= lp_len)
- 			{
- 				if (tuphdr->t_infomask & HEAP_HASNULL)
- 				{
- 					bits_len = tuphdr->t_hoff -
- 						(((char *) tuphdr->t_bits) -((char *) tuphdr));
- 
- 					values[11] = CStringGetTextDatum(
- 								 bits_to_text(tuphdr->t_bits, bits_len * 8));
- 				}
- 				else
- 					nulls[11] = true;
- 
- 				if (tuphdr->t_infomask & HEAP_HASOID)
- 					values[12] = HeapTupleHeaderGetOid(tuphdr);
- 				else
- 					nulls[12] = true;
- 			}
- 			else
- 			{
- 				nulls[11] = true;
- 				nulls[12] = true;
- 			}
- 		}
- 		else
- 		{
- 			/*
- 			 * The line pointer is not used, or it's invalid. Set the rest of
- 			 * the fields to NULL
- 			 */
- 			int			i;
- 
- 			for (i = 4; i <= 12; i++)
- 				nulls[i] = true;
- 		}
- 
- 		/* Build and return the result tuple. */
- 		resultTuple = heap_form_tuple(inter_call_data->tupd, values, nulls);
- 		result = HeapTupleGetDatum(resultTuple);
- 
- 		inter_call_data->offset++;
- 
- 		SRF_RETURN_NEXT(fctx, result);
- 	}
- 	else
- 		SRF_RETURN_DONE(fctx);
- }
--- 0 ----
diff --git a/contrib/pageinspect/pageinspect--1.0.sql b/contrib/pageinspect/pageinspect--1.0.sql
index 5613956..e69de29 100644
*** a/contrib/pageinspect/pageinspect--1.0.sql
--- b/contrib/pageinspect/pageinspect--1.0.sql
***************
*** 1,107 ****
- /* contrib/pageinspect/pageinspect--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pageinspect" to load this file. \quit
- 
- --
- -- get_raw_page()
- --
- CREATE FUNCTION get_raw_page(text, int4)
- RETURNS bytea
- AS 'MODULE_PATHNAME', 'get_raw_page'
- LANGUAGE C STRICT;
- 
- CREATE FUNCTION get_raw_page(text, text, int4)
- RETURNS bytea
- AS 'MODULE_PATHNAME', 'get_raw_page_fork'
- LANGUAGE C STRICT;
- 
- --
- -- page_header()
- --
- CREATE FUNCTION page_header(IN page bytea,
-     OUT lsn text,
-     OUT tli smallint,
-     OUT flags smallint,
-     OUT lower smallint,
-     OUT upper smallint,
-     OUT special smallint,
-     OUT pagesize smallint,
-     OUT version smallint,
-     OUT prune_xid xid)
- AS 'MODULE_PATHNAME', 'page_header'
- LANGUAGE C STRICT;
- 
- --
- -- heap_page_items()
- --
- CREATE FUNCTION heap_page_items(IN page bytea,
-     OUT lp smallint,
-     OUT lp_off smallint,
-     OUT lp_flags smallint,
-     OUT lp_len smallint,
-     OUT t_xmin xid,
-     OUT t_xmax xid,
-     OUT t_field3 int4,
-     OUT t_ctid tid,
-     OUT t_infomask2 integer,
-     OUT t_infomask integer,
-     OUT t_hoff smallint,
-     OUT t_bits text,
-     OUT t_oid oid)
- RETURNS SETOF record
- AS 'MODULE_PATHNAME', 'heap_page_items'
- LANGUAGE C STRICT;
- 
- --
- -- bt_metap()
- --
- CREATE FUNCTION bt_metap(IN relname text,
-     OUT magic int4,
-     OUT version int4,
-     OUT root int4,
-     OUT level int4,
-     OUT fastroot int4,
-     OUT fastlevel int4)
- AS 'MODULE_PATHNAME', 'bt_metap'
- LANGUAGE C STRICT;
- 
- --
- -- bt_page_stats()
- --
- CREATE FUNCTION bt_page_stats(IN relname text, IN blkno int4,
-     OUT blkno int4,
-     OUT type "char",
-     OUT live_items int4,
-     OUT dead_items int4,
-     OUT avg_item_size int4,
-     OUT page_size int4,
-     OUT free_size int4,
-     OUT btpo_prev int4,
-     OUT btpo_next int4,
-     OUT btpo int4,
-     OUT btpo_flags int4)
- AS 'MODULE_PATHNAME', 'bt_page_stats'
- LANGUAGE C STRICT;
- 
- --
- -- bt_page_items()
- --
- CREATE FUNCTION bt_page_items(IN relname text, IN blkno int4,
-     OUT itemoffset smallint,
-     OUT ctid tid,
-     OUT itemlen smallint,
-     OUT nulls bool,
-     OUT vars bool,
-     OUT data text)
- RETURNS SETOF record
- AS 'MODULE_PATHNAME', 'bt_page_items'
- LANGUAGE C STRICT;
- 
- --
- -- fsm_page_contents()
- --
- CREATE FUNCTION fsm_page_contents(IN page bytea)
- RETURNS text
- AS 'MODULE_PATHNAME', 'fsm_page_contents'
- LANGUAGE C STRICT;
--- 0 ----
diff --git a/contrib/pageinspect/pageinspect--unpackaged--1.0.sql b/contrib/pageinspect/pageinspect--unpackaged--1.0.sql
index 13e2167..e69de29 100644
*** a/contrib/pageinspect/pageinspect--unpackaged--1.0.sql
--- b/contrib/pageinspect/pageinspect--unpackaged--1.0.sql
***************
*** 1,31 ****
- /* contrib/pageinspect/pageinspect--unpackaged--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pageinspect" to load this file. \quit
- 
- DROP FUNCTION heap_page_items(bytea);
- CREATE FUNCTION heap_page_items(IN page bytea,
- 	OUT lp smallint,
- 	OUT lp_off smallint,
- 	OUT lp_flags smallint,
- 	OUT lp_len smallint,
- 	OUT t_xmin xid,
- 	OUT t_xmax xid,
- 	OUT t_field3 int4,
- 	OUT t_ctid tid,
- 	OUT t_infomask2 integer,
- 	OUT t_infomask integer,
- 	OUT t_hoff smallint,
- 	OUT t_bits text,
- 	OUT t_oid oid)
- RETURNS SETOF record
- AS 'MODULE_PATHNAME', 'heap_page_items'
- LANGUAGE C STRICT;
- 
- ALTER EXTENSION pageinspect ADD function get_raw_page(text,integer);
- ALTER EXTENSION pageinspect ADD function get_raw_page(text,text,integer);
- ALTER EXTENSION pageinspect ADD function page_header(bytea);
- ALTER EXTENSION pageinspect ADD function bt_metap(text);
- ALTER EXTENSION pageinspect ADD function bt_page_stats(text,integer);
- ALTER EXTENSION pageinspect ADD function bt_page_items(text,integer);
- ALTER EXTENSION pageinspect ADD function fsm_page_contents(bytea);
--- 0 ----
diff --git a/contrib/pageinspect/pageinspect.control b/contrib/pageinspect/pageinspect.control
index f9da0e8..e69de29 100644
*** a/contrib/pageinspect/pageinspect.control
--- b/contrib/pageinspect/pageinspect.control
***************
*** 1,5 ****
- # pageinspect extension
- comment = 'inspect the contents of database pages at a low level'
- default_version = '1.0'
- module_pathname = '$libdir/pageinspect'
- relocatable = true
--- 0 ----
diff --git a/contrib/pageinspect/rawpage.c b/contrib/pageinspect/rawpage.c
index 362ad84..e69de29 100644
*** a/contrib/pageinspect/rawpage.c
--- b/contrib/pageinspect/rawpage.c
***************
*** 1,229 ****
- /*-------------------------------------------------------------------------
-  *
-  * rawpage.c
-  *	  Functions to extract a raw page as bytea and inspect it
-  *
-  * Access-method specific inspection functions are in separate files.
-  *
-  * Copyright (c) 2007-2011, PostgreSQL Global Development Group
-  *
-  * IDENTIFICATION
-  *	  contrib/pageinspect/rawpage.c
-  *
-  *-------------------------------------------------------------------------
-  */
- 
- #include "postgres.h"
- 
- #include "catalog/catalog.h"
- #include "catalog/namespace.h"
- #include "funcapi.h"
- #include "miscadmin.h"
- #include "storage/bufmgr.h"
- #include "utils/builtins.h"
- #include "utils/rel.h"
- 
- PG_MODULE_MAGIC;
- 
- Datum		get_raw_page(PG_FUNCTION_ARGS);
- Datum		get_raw_page_fork(PG_FUNCTION_ARGS);
- Datum		page_header(PG_FUNCTION_ARGS);
- 
- static bytea *get_raw_page_internal(text *relname, ForkNumber forknum,
- 					  BlockNumber blkno);
- 
- 
- /*
-  * get_raw_page
-  *
-  * Returns a copy of a page from shared buffers as a bytea
-  */
- PG_FUNCTION_INFO_V1(get_raw_page);
- 
- Datum
- get_raw_page(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	uint32		blkno = PG_GETARG_UINT32(1);
- 	bytea	   *raw_page;
- 
- 	/*
- 	 * We don't normally bother to check the number of arguments to a C
- 	 * function, but here it's needed for safety because early 8.4 beta
- 	 * releases mistakenly redefined get_raw_page() as taking three arguments.
- 	 */
- 	if (PG_NARGS() != 2)
- 		ereport(ERROR,
- 				(errmsg("wrong number of arguments to get_raw_page()"),
- 				 errhint("Run the updated pageinspect.sql script.")));
- 
- 	raw_page = get_raw_page_internal(relname, MAIN_FORKNUM, blkno);
- 
- 	PG_RETURN_BYTEA_P(raw_page);
- }
- 
- /*
-  * get_raw_page_fork
-  *
-  * Same, for any fork
-  */
- PG_FUNCTION_INFO_V1(get_raw_page_fork);
- 
- Datum
- get_raw_page_fork(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	text	   *forkname = PG_GETARG_TEXT_P(1);
- 	uint32		blkno = PG_GETARG_UINT32(2);
- 	bytea	   *raw_page;
- 	ForkNumber	forknum;
- 
- 	forknum = forkname_to_number(text_to_cstring(forkname));
- 
- 	raw_page = get_raw_page_internal(relname, forknum, blkno);
- 
- 	PG_RETURN_BYTEA_P(raw_page);
- }
- 
- /*
-  * workhorse
-  */
- static bytea *
- get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
- {
- 	bytea	   *raw_page;
- 	RangeVar   *relrv;
- 	Relation	rel;
- 	char	   *raw_page_data;
- 	Buffer		buf;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use raw functions"))));
- 
- 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 	rel = relation_openrv(relrv, AccessShareLock);
- 
- 	/* Check that this relation has storage */
- 	if (rel->rd_rel->relkind == RELKIND_VIEW)
- 		ereport(ERROR,
- 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
- 				 errmsg("cannot get raw page from view \"%s\"",
- 						RelationGetRelationName(rel))));
- 	if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
- 		ereport(ERROR,
- 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
- 				 errmsg("cannot get raw page from composite type \"%s\"",
- 						RelationGetRelationName(rel))));
- 	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
- 		ereport(ERROR,
- 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
- 				 errmsg("cannot get raw page from foreign table \"%s\"",
- 						RelationGetRelationName(rel))));
- 
- 	/*
- 	 * Reject attempts to read non-local temporary relations; we would be
- 	 * likely to get wrong data since we have no visibility into the owning
- 	 * session's local buffers.
- 	 */
- 	if (RELATION_IS_OTHER_TEMP(rel))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				 errmsg("cannot access temporary tables of other sessions")));
- 
- 	if (blkno >= RelationGetNumberOfBlocks(rel))
- 		elog(ERROR, "block number %u is out of range for relation \"%s\"",
- 			 blkno, RelationGetRelationName(rel));
- 
- 	/* Initialize buffer to copy to */
- 	raw_page = (bytea *) palloc(BLCKSZ + VARHDRSZ);
- 	SET_VARSIZE(raw_page, BLCKSZ + VARHDRSZ);
- 	raw_page_data = VARDATA(raw_page);
- 
- 	/* Take a verbatim copy of the page */
- 
- 	buf = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
- 	LockBuffer(buf, BUFFER_LOCK_SHARE);
- 
- 	memcpy(raw_page_data, BufferGetPage(buf), BLCKSZ);
- 
- 	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
- 	ReleaseBuffer(buf);
- 
- 	relation_close(rel, AccessShareLock);
- 
- 	return raw_page;
- }
- 
- /*
-  * page_header
-  *
-  * Allows inspection of page header fields of a raw page
-  */
- 
- PG_FUNCTION_INFO_V1(page_header);
- 
- Datum
- page_header(PG_FUNCTION_ARGS)
- {
- 	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
- 	int			raw_page_size;
- 
- 	TupleDesc	tupdesc;
- 
- 	Datum		result;
- 	HeapTuple	tuple;
- 	Datum		values[9];
- 	bool		nulls[9];
- 
- 	PageHeader	page;
- 	XLogRecPtr	lsn;
- 	char		lsnchar[64];
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use raw page functions"))));
- 
- 	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
- 
- 	/*
- 	 * Check that enough data was supplied, so that we don't try to access
- 	 * fields outside the supplied buffer.
- 	 */
- 	if (raw_page_size < sizeof(PageHeaderData))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- 				 errmsg("input page too small (%d bytes)", raw_page_size)));
- 
- 	page = (PageHeader) VARDATA(raw_page);
- 
- 	/* Build a tuple descriptor for our result type */
- 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
- 		elog(ERROR, "return type must be a row type");
- 
- 	/* Extract information from the page header */
- 
- 	lsn = PageGetLSN(page);
- 	snprintf(lsnchar, sizeof(lsnchar), "%X/%X", lsn.xlogid, lsn.xrecoff);
- 
- 	values[0] = CStringGetTextDatum(lsnchar);
- 	values[1] = UInt16GetDatum(PageGetTLI(page));
- 	values[2] = UInt16GetDatum(page->pd_flags);
- 	values[3] = UInt16GetDatum(page->pd_lower);
- 	values[4] = UInt16GetDatum(page->pd_upper);
- 	values[5] = UInt16GetDatum(page->pd_special);
- 	values[6] = UInt16GetDatum(PageGetPageSize(page));
- 	values[7] = UInt16GetDatum(PageGetPageLayoutVersion(page));
- 	values[8] = TransactionIdGetDatum(page->pd_prune_xid);
- 
- 	/* Build and return the tuple. */
- 
- 	memset(nulls, 0, sizeof(nulls));
- 
- 	tuple = heap_form_tuple(tupdesc, values, nulls);
- 	result = HeapTupleGetDatum(tuple);
- 
- 	PG_RETURN_DATUM(result);
- }
--- 0 ----
diff --git a/contrib/pg_buffercache/Makefile b/contrib/pg_buffercache/Makefile
index 323c0ac..e69de29 100644
*** a/contrib/pg_buffercache/Makefile
--- b/contrib/pg_buffercache/Makefile
***************
*** 1,18 ****
- # contrib/pg_buffercache/Makefile
- 
- MODULE_big = pg_buffercache
- OBJS = pg_buffercache_pages.o
- 
- EXTENSION = pg_buffercache
- DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
- 
- ifdef USE_PGXS
- PG_CONFIG = pg_config
- PGXS := $(shell $(PG_CONFIG) --pgxs)
- include $(PGXS)
- else
- subdir = contrib/pg_buffercache
- top_builddir = ../..
- include $(top_builddir)/src/Makefile.global
- include $(top_srcdir)/contrib/contrib-global.mk
- endif
--- 0 ----
diff --git a/contrib/pg_buffercache/pg_buffercache--1.0.sql b/contrib/pg_buffercache/pg_buffercache--1.0.sql
index 4ca4c44..e69de29 100644
*** a/contrib/pg_buffercache/pg_buffercache--1.0.sql
--- b/contrib/pg_buffercache/pg_buffercache--1.0.sql
***************
*** 1,20 ****
- /* contrib/pg_buffercache/pg_buffercache--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pg_buffercache" to load this file. \quit
- 
- -- Register the function.
- CREATE FUNCTION pg_buffercache_pages()
- RETURNS SETOF RECORD
- AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
- LANGUAGE C;
- 
- -- Create a view for convenient access.
- CREATE VIEW pg_buffercache AS
- 	SELECT P.* FROM pg_buffercache_pages() AS P
- 	(bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
- 	 relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
- 
- -- Don't want these to be available to public.
- REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
- REVOKE ALL ON pg_buffercache FROM PUBLIC;
--- 0 ----
diff --git a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
index bfe6e52..e69de29 100644
*** a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
--- b/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
***************
*** 1,7 ****
- /* contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pg_buffercache" to load this file. \quit
- 
- ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
- ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
--- 0 ----
diff --git a/contrib/pg_buffercache/pg_buffercache.control b/contrib/pg_buffercache/pg_buffercache.control
index 709513c..e69de29 100644
*** a/contrib/pg_buffercache/pg_buffercache.control
--- b/contrib/pg_buffercache/pg_buffercache.control
***************
*** 1,5 ****
- # pg_buffercache extension
- comment = 'examine the shared buffer cache'
- default_version = '1.0'
- module_pathname = '$libdir/pg_buffercache'
- relocatable = true
--- 0 ----
diff --git a/contrib/pg_buffercache/pg_buffercache_pages.c b/contrib/pg_buffercache/pg_buffercache_pages.c
index 27e52b3..e69de29 100644
*** a/contrib/pg_buffercache/pg_buffercache_pages.c
--- b/contrib/pg_buffercache/pg_buffercache_pages.c
***************
*** 1,217 ****
- /*-------------------------------------------------------------------------
-  *
-  * pg_buffercache_pages.c
-  *	  display some contents of the buffer cache
-  *
-  *	  contrib/pg_buffercache/pg_buffercache_pages.c
-  *-------------------------------------------------------------------------
-  */
- #include "postgres.h"
- 
- #include "catalog/pg_type.h"
- #include "funcapi.h"
- #include "storage/buf_internals.h"
- #include "storage/bufmgr.h"
- 
- 
- #define NUM_BUFFERCACHE_PAGES_ELEM	8
- 
- PG_MODULE_MAGIC;
- 
- Datum		pg_buffercache_pages(PG_FUNCTION_ARGS);
- 
- 
- /*
-  * Record structure holding the to be exposed cache data.
-  */
- typedef struct
- {
- 	uint32		bufferid;
- 	Oid			relfilenode;
- 	Oid			reltablespace;
- 	Oid			reldatabase;
- 	ForkNumber	forknum;
- 	BlockNumber blocknum;
- 	bool		isvalid;
- 	bool		isdirty;
- 	uint16		usagecount;
- } BufferCachePagesRec;
- 
- 
- /*
-  * Function context for data persisting over repeated calls.
-  */
- typedef struct
- {
- 	TupleDesc	tupdesc;
- 	BufferCachePagesRec *record;
- } BufferCachePagesContext;
- 
- 
- /*
-  * Function returning data from the shared buffer cache - buffer number,
-  * relation node/tablespace/database/blocknum and dirty indicator.
-  */
- PG_FUNCTION_INFO_V1(pg_buffercache_pages);
- 
- Datum
- pg_buffercache_pages(PG_FUNCTION_ARGS)
- {
- 	FuncCallContext *funcctx;
- 	Datum		result;
- 	MemoryContext oldcontext;
- 	BufferCachePagesContext *fctx;		/* User function context. */
- 	TupleDesc	tupledesc;
- 	HeapTuple	tuple;
- 
- 	if (SRF_IS_FIRSTCALL())
- 	{
- 		int			i;
- 		volatile BufferDesc *bufHdr;
- 
- 		funcctx = SRF_FIRSTCALL_INIT();
- 
- 		/* Switch context when allocating stuff to be used in later calls */
- 		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
- 
- 		/* Create a user function context for cross-call persistence */
- 		fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
- 
- 		/* Construct a tuple descriptor for the result rows. */
- 		tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
- 						   INT4OID, -1, 0);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
- 						   OIDOID, -1, 0);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
- 						   OIDOID, -1, 0);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
- 						   OIDOID, -1, 0);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
- 						   INT2OID, -1, 0);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
- 						   INT8OID, -1, 0);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
- 						   BOOLOID, -1, 0);
- 		TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
- 						   INT2OID, -1, 0);
- 
- 		fctx->tupdesc = BlessTupleDesc(tupledesc);
- 
- 		/* Allocate NBuffers worth of BufferCachePagesRec records. */
- 		fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
- 
- 		/* Set max calls and remember the user function context. */
- 		funcctx->max_calls = NBuffers;
- 		funcctx->user_fctx = fctx;
- 
- 		/* Return to original context when allocating transient memory */
- 		MemoryContextSwitchTo(oldcontext);
- 
- 		/*
- 		 * To get a consistent picture of the buffer state, we must lock all
- 		 * partitions of the buffer map.  Needless to say, this is horrible
- 		 * for concurrency.  Must grab locks in increasing order to avoid
- 		 * possible deadlocks.
- 		 */
- 		for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
- 			LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
- 
- 		/*
- 		 * Scan though all the buffers, saving the relevant fields in the
- 		 * fctx->record structure.
- 		 */
- 		for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
- 		{
- 			/* Lock each buffer header before inspecting. */
- 			LockBufHdr(bufHdr);
- 
- 			fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
- 			fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
- 			fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
- 			fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
- 			fctx->record[i].forknum = bufHdr->tag.forkNum;
- 			fctx->record[i].blocknum = bufHdr->tag.blockNum;
- 			fctx->record[i].usagecount = bufHdr->usage_count;
- 
- 			if (bufHdr->flags & BM_DIRTY)
- 				fctx->record[i].isdirty = true;
- 			else
- 				fctx->record[i].isdirty = false;
- 
- 			/* Note if the buffer is valid, and has storage created */
- 			if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
- 				fctx->record[i].isvalid = true;
- 			else
- 				fctx->record[i].isvalid = false;
- 
- 			UnlockBufHdr(bufHdr);
- 		}
- 
- 		/*
- 		 * And release locks.  We do this in reverse order for two reasons:
- 		 * (1) Anyone else who needs more than one of the locks will be trying
- 		 * to lock them in increasing order; we don't want to release the
- 		 * other process until it can get all the locks it needs. (2) This
- 		 * avoids O(N^2) behavior inside LWLockRelease.
- 		 */
- 		for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
- 			LWLockRelease(FirstBufMappingLock + i);
- 	}
- 
- 	funcctx = SRF_PERCALL_SETUP();
- 
- 	/* Get the saved state */
- 	fctx = funcctx->user_fctx;
- 
- 	if (funcctx->call_cntr < funcctx->max_calls)
- 	{
- 		uint32		i = funcctx->call_cntr;
- 		Datum		values[NUM_BUFFERCACHE_PAGES_ELEM];
- 		bool		nulls[NUM_BUFFERCACHE_PAGES_ELEM];
- 
- 		values[0] = Int32GetDatum(fctx->record[i].bufferid);
- 		nulls[0] = false;
- 
- 		/*
- 		 * Set all fields except the bufferid to null if the buffer is unused
- 		 * or not valid.
- 		 */
- 		if (fctx->record[i].blocknum == InvalidBlockNumber ||
- 			fctx->record[i].isvalid == false)
- 		{
- 			nulls[1] = true;
- 			nulls[2] = true;
- 			nulls[3] = true;
- 			nulls[4] = true;
- 			nulls[5] = true;
- 			nulls[6] = true;
- 			nulls[7] = true;
- 		}
- 		else
- 		{
- 			values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
- 			nulls[1] = false;
- 			values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
- 			nulls[2] = false;
- 			values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
- 			nulls[3] = false;
- 			values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
- 			nulls[4] = false;
- 			values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
- 			nulls[5] = false;
- 			values[6] = BoolGetDatum(fctx->record[i].isdirty);
- 			nulls[6] = false;
- 			values[7] = Int16GetDatum(fctx->record[i].usagecount);
- 			nulls[7] = false;
- 		}
- 
- 		/* Build and return the tuple. */
- 		tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
- 		result = HeapTupleGetDatum(tuple);
- 
- 		SRF_RETURN_NEXT(funcctx, result);
- 	}
- 	else
- 		SRF_RETURN_DONE(funcctx);
- }
--- 0 ----
diff --git a/contrib/pg_freespacemap/Makefile b/contrib/pg_freespacemap/Makefile
index b2e3ba3..e69de29 100644
*** a/contrib/pg_freespacemap/Makefile
--- b/contrib/pg_freespacemap/Makefile
***************
*** 1,18 ****
- # contrib/pg_freespacemap/Makefile
- 
- MODULE_big = pg_freespacemap
- OBJS = pg_freespacemap.o
- 
- EXTENSION = pg_freespacemap
- DATA = pg_freespacemap--1.0.sql pg_freespacemap--unpackaged--1.0.sql
- 
- ifdef USE_PGXS
- PG_CONFIG = pg_config
- PGXS := $(shell $(PG_CONFIG) --pgxs)
- include $(PGXS)
- else
- subdir = contrib/pg_freespacemap
- top_builddir = ../..
- include $(top_builddir)/src/Makefile.global
- include $(top_srcdir)/contrib/contrib-global.mk
- endif
--- 0 ----
diff --git a/contrib/pg_freespacemap/pg_freespacemap--1.0.sql b/contrib/pg_freespacemap/pg_freespacemap--1.0.sql
index 2adb52a..e69de29 100644
*** a/contrib/pg_freespacemap/pg_freespacemap--1.0.sql
--- b/contrib/pg_freespacemap/pg_freespacemap--1.0.sql
***************
*** 1,25 ****
- /* contrib/pg_freespacemap/pg_freespacemap--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pg_freespacemap" to load this file. \quit
- 
- -- Register the C function.
- CREATE FUNCTION pg_freespace(regclass, bigint)
- RETURNS int2
- AS 'MODULE_PATHNAME', 'pg_freespace'
- LANGUAGE C STRICT;
- 
- -- pg_freespace shows the recorded space avail at each block in a relation
- CREATE FUNCTION
-   pg_freespace(rel regclass, blkno OUT bigint, avail OUT int2)
- RETURNS SETOF RECORD
- AS $$
-   SELECT blkno, pg_freespace($1, blkno) AS avail
-   FROM generate_series(0, pg_relation_size($1) / current_setting('block_size')::bigint - 1) AS blkno;
- $$
- LANGUAGE SQL;
- 
- 
- -- Don't want these to be available to public.
- REVOKE ALL ON FUNCTION pg_freespace(regclass, bigint) FROM PUBLIC;
- REVOKE ALL ON FUNCTION pg_freespace(regclass) FROM PUBLIC;
--- 0 ----
diff --git a/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql b/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
index 5e8d7e4..e69de29 100644
*** a/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
--- b/contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
***************
*** 1,7 ****
- /* contrib/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pg_freespacemap" to load this file. \quit
- 
- ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass,bigint);
- ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass);
--- 0 ----
diff --git a/contrib/pg_freespacemap/pg_freespacemap.c b/contrib/pg_freespacemap/pg_freespacemap.c
index f6f7d2e..e69de29 100644
*** a/contrib/pg_freespacemap/pg_freespacemap.c
--- b/contrib/pg_freespacemap/pg_freespacemap.c
***************
*** 1,44 ****
- /*-------------------------------------------------------------------------
-  *
-  * pg_freespacemap.c
-  *	  display contents of a free space map
-  *
-  *	  contrib/pg_freespacemap/pg_freespacemap.c
-  *-------------------------------------------------------------------------
-  */
- #include "postgres.h"
- 
- #include "funcapi.h"
- #include "storage/freespace.h"
- 
- 
- PG_MODULE_MAGIC;
- 
- Datum		pg_freespace(PG_FUNCTION_ARGS);
- 
- /*
-  * Returns the amount of free space on a given page, according to the
-  * free space map.
-  */
- PG_FUNCTION_INFO_V1(pg_freespace);
- 
- Datum
- pg_freespace(PG_FUNCTION_ARGS)
- {
- 	Oid			relid = PG_GETARG_OID(0);
- 	int64		blkno = PG_GETARG_INT64(1);
- 	int16		freespace;
- 	Relation	rel;
- 
- 	rel = relation_open(relid, AccessShareLock);
- 
- 	if (blkno < 0 || blkno > MaxBlockNumber)
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- 				 errmsg("invalid block number")));
- 
- 	freespace = GetRecordedFreeSpace(rel, blkno);
- 
- 	relation_close(rel, AccessShareLock);
- 	PG_RETURN_INT16(freespace);
- }
--- 0 ----
diff --git a/contrib/pg_freespacemap/pg_freespacemap.control b/contrib/pg_freespacemap/pg_freespacemap.control
index 34b695f..e69de29 100644
*** a/contrib/pg_freespacemap/pg_freespacemap.control
--- b/contrib/pg_freespacemap/pg_freespacemap.control
***************
*** 1,5 ****
- # pg_freespacemap extension
- comment = 'examine the free space map (FSM)'
- default_version = '1.0'
- module_pathname = '$libdir/pg_freespacemap'
- relocatable = true
--- 0 ----
diff --git a/contrib/pg_stat_statements/Makefile b/contrib/pg_stat_statements/Makefile
index e086fd8..e69de29 100644
*** a/contrib/pg_stat_statements/Makefile
--- b/contrib/pg_stat_statements/Makefile
***************
*** 1,18 ****
- # contrib/pg_stat_statements/Makefile
- 
- MODULE_big = pg_stat_statements
- OBJS = pg_stat_statements.o
- 
- EXTENSION = pg_stat_statements
- DATA = pg_stat_statements--1.0.sql pg_stat_statements--unpackaged--1.0.sql
- 
- ifdef USE_PGXS
- PG_CONFIG = pg_config
- PGXS := $(shell $(PG_CONFIG) --pgxs)
- include $(PGXS)
- else
- subdir = contrib/pg_stat_statements
- top_builddir = ../..
- include $(top_builddir)/src/Makefile.global
- include $(top_srcdir)/contrib/contrib-global.mk
- endif
--- 0 ----
diff --git a/contrib/pg_stat_statements/pg_stat_statements--1.0.sql b/contrib/pg_stat_statements/pg_stat_statements--1.0.sql
index 5294a01..e69de29 100644
*** a/contrib/pg_stat_statements/pg_stat_statements--1.0.sql
--- b/contrib/pg_stat_statements/pg_stat_statements--1.0.sql
***************
*** 1,39 ****
- /* contrib/pg_stat_statements/pg_stat_statements--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pg_stat_statements" to load this file. \quit
- 
- -- Register functions.
- CREATE FUNCTION pg_stat_statements_reset()
- RETURNS void
- AS 'MODULE_PATHNAME'
- LANGUAGE C;
- 
- CREATE FUNCTION pg_stat_statements(
-     OUT userid oid,
-     OUT dbid oid,
-     OUT query text,
-     OUT calls int8,
-     OUT total_time float8,
-     OUT rows int8,
-     OUT shared_blks_hit int8,
-     OUT shared_blks_read int8,
-     OUT shared_blks_written int8,
-     OUT local_blks_hit int8,
-     OUT local_blks_read int8,
-     OUT local_blks_written int8,
-     OUT temp_blks_read int8,
-     OUT temp_blks_written int8
- )
- RETURNS SETOF record
- AS 'MODULE_PATHNAME'
- LANGUAGE C;
- 
- -- Register a view on the function for ease of use.
- CREATE VIEW pg_stat_statements AS
-   SELECT * FROM pg_stat_statements();
- 
- GRANT SELECT ON pg_stat_statements TO PUBLIC;
- 
- -- Don't want this to be available to non-superusers.
- REVOKE ALL ON FUNCTION pg_stat_statements_reset() FROM PUBLIC;
--- 0 ----
diff --git a/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql b/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
index e84a3cb..e69de29 100644
*** a/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
--- b/contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
***************
*** 1,8 ****
- /* contrib/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pg_stat_statements" to load this file. \quit
- 
- ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements_reset();
- ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements();
- ALTER EXTENSION pg_stat_statements ADD view pg_stat_statements;
--- 0 ----
diff --git a/contrib/pg_stat_statements/pg_stat_statements.c b/contrib/pg_stat_statements/pg_stat_statements.c
index 8dc3054..e69de29 100644
*** a/contrib/pg_stat_statements/pg_stat_statements.c
--- b/contrib/pg_stat_statements/pg_stat_statements.c
***************
*** 1,1042 ****
- /*-------------------------------------------------------------------------
-  *
-  * pg_stat_statements.c
-  *		Track statement execution times across a whole database cluster.
-  *
-  * Note about locking issues: to create or delete an entry in the shared
-  * hashtable, one must hold pgss->lock exclusively.  Modifying any field
-  * in an entry except the counters requires the same.  To look up an entry,
-  * one must hold the lock shared.  To read or update the counters within
-  * an entry, one must hold the lock shared or exclusive (so the entry doesn't
-  * disappear!) and also take the entry's mutex spinlock.
-  *
-  *
-  * Copyright (c) 2008-2011, PostgreSQL Global Development Group
-  *
-  * IDENTIFICATION
-  *	  contrib/pg_stat_statements/pg_stat_statements.c
-  *
-  *-------------------------------------------------------------------------
-  */
- #include "postgres.h"
- 
- #include <unistd.h>
- 
- #include "access/hash.h"
- #include "executor/instrument.h"
- #include "funcapi.h"
- #include "mb/pg_wchar.h"
- #include "miscadmin.h"
- #include "pgstat.h"
- #include "storage/fd.h"
- #include "storage/ipc.h"
- #include "storage/spin.h"
- #include "tcop/utility.h"
- #include "utils/builtins.h"
- 
- 
- PG_MODULE_MAGIC;
- 
- /* Location of stats file */
- #define PGSS_DUMP_FILE	"global/pg_stat_statements.stat"
- 
- /* This constant defines the magic number in the stats file header */
- static const uint32 PGSS_FILE_HEADER = 0x20100108;
- 
- /* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
- #define USAGE_EXEC(duration)	(1.0)
- #define USAGE_INIT				(1.0)	/* including initial planning */
- #define USAGE_DECREASE_FACTOR	(0.99)	/* decreased every entry_dealloc */
- #define USAGE_DEALLOC_PERCENT	5		/* free this % of entries at once */
- 
- /*
-  * Hashtable key that defines the identity of a hashtable entry.  The
-  * hash comparators do not assume that the query string is null-terminated;
-  * this lets us search for an mbcliplen'd string without copying it first.
-  *
-  * Presently, the query encoding is fully determined by the source database
-  * and so we don't really need it to be in the key.  But that might not always
-  * be true. Anyway it's notationally convenient to pass it as part of the key.
-  */
- typedef struct pgssHashKey
- {
- 	Oid			userid;			/* user OID */
- 	Oid			dbid;			/* database OID */
- 	int			encoding;		/* query encoding */
- 	int			query_len;		/* # of valid bytes in query string */
- 	const char *query_ptr;		/* query string proper */
- } pgssHashKey;
- 
- /*
-  * The actual stats counters kept within pgssEntry.
-  */
- typedef struct Counters
- {
- 	int64		calls;			/* # of times executed */
- 	double		total_time;		/* total execution time in seconds */
- 	int64		rows;			/* total # of retrieved or affected rows */
- 	int64		shared_blks_hit;	/* # of shared buffer hits */
- 	int64		shared_blks_read;		/* # of shared disk blocks read */
- 	int64		shared_blks_written;	/* # of shared disk blocks written */
- 	int64		local_blks_hit; /* # of local buffer hits */
- 	int64		local_blks_read;	/* # of local disk blocks read */
- 	int64		local_blks_written;		/* # of local disk blocks written */
- 	int64		temp_blks_read; /* # of temp blocks read */
- 	int64		temp_blks_written;		/* # of temp blocks written */
- 	double		usage;			/* usage factor */
- } Counters;
- 
- /*
-  * Statistics per statement
-  *
-  * NB: see the file read/write code before changing field order here.
-  */
- typedef struct pgssEntry
- {
- 	pgssHashKey key;			/* hash key of entry - MUST BE FIRST */
- 	Counters	counters;		/* the statistics for this query */
- 	slock_t		mutex;			/* protects the counters only */
- 	char		query[1];		/* VARIABLE LENGTH ARRAY - MUST BE LAST */
- 	/* Note: the allocated length of query[] is actually pgss->query_size */
- } pgssEntry;
- 
- /*
-  * Global shared state
-  */
- typedef struct pgssSharedState
- {
- 	LWLockId	lock;			/* protects hashtable search/modification */
- 	int			query_size;		/* max query length in bytes */
- } pgssSharedState;
- 
- /*---- Local variables ----*/
- 
- /* Current nesting depth of ExecutorRun calls */
- static int	nested_level = 0;
- 
- /* Saved hook values in case of unload */
- static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
- static ExecutorStart_hook_type prev_ExecutorStart = NULL;
- static ExecutorRun_hook_type prev_ExecutorRun = NULL;
- static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
- static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
- static ProcessUtility_hook_type prev_ProcessUtility = NULL;
- 
- /* Links to shared memory state */
- static pgssSharedState *pgss = NULL;
- static HTAB *pgss_hash = NULL;
- 
- /*---- GUC variables ----*/
- 
- typedef enum
- {
- 	PGSS_TRACK_NONE,			/* track no statements */
- 	PGSS_TRACK_TOP,				/* only top level statements */
- 	PGSS_TRACK_ALL				/* all statements, including nested ones */
- }	PGSSTrackLevel;
- 
- static const struct config_enum_entry track_options[] =
- {
- 	{"none", PGSS_TRACK_NONE, false},
- 	{"top", PGSS_TRACK_TOP, false},
- 	{"all", PGSS_TRACK_ALL, false},
- 	{NULL, 0, false}
- };
- 
- static int	pgss_max;			/* max # statements to track */
- static int	pgss_track;			/* tracking level */
- static bool pgss_track_utility; /* whether to track utility commands */
- static bool pgss_save;			/* whether to save stats across shutdown */
- 
- 
- #define pgss_enabled() \
- 	(pgss_track == PGSS_TRACK_ALL || \
- 	(pgss_track == PGSS_TRACK_TOP && nested_level == 0))
- 
- /*---- Function declarations ----*/
- 
- void		_PG_init(void);
- void		_PG_fini(void);
- 
- Datum		pg_stat_statements_reset(PG_FUNCTION_ARGS);
- Datum		pg_stat_statements(PG_FUNCTION_ARGS);
- 
- PG_FUNCTION_INFO_V1(pg_stat_statements_reset);
- PG_FUNCTION_INFO_V1(pg_stat_statements);
- 
- static void pgss_shmem_startup(void);
- static void pgss_shmem_shutdown(int code, Datum arg);
- static void pgss_ExecutorStart(QueryDesc *queryDesc, int eflags);
- static void pgss_ExecutorRun(QueryDesc *queryDesc,
- 				 ScanDirection direction,
- 				 long count);
- static void pgss_ExecutorFinish(QueryDesc *queryDesc);
- static void pgss_ExecutorEnd(QueryDesc *queryDesc);
- static void pgss_ProcessUtility(Node *parsetree,
- 			  const char *queryString, ParamListInfo params, bool isTopLevel,
- 					DestReceiver *dest, char *completionTag);
- static uint32 pgss_hash_fn(const void *key, Size keysize);
- static int	pgss_match_fn(const void *key1, const void *key2, Size keysize);
- static void pgss_store(const char *query, double total_time, uint64 rows,
- 		   const BufferUsage *bufusage);
- static Size pgss_memsize(void);
- static pgssEntry *entry_alloc(pgssHashKey *key);
- static void entry_dealloc(void);
- static void entry_reset(void);
- 
- 
- /*
-  * Module load callback
-  */
- void
- _PG_init(void)
- {
- 	/*
- 	 * In order to create our shared memory area, we have to be loaded via
- 	 * shared_preload_libraries.  If not, fall out without hooking into any of
- 	 * the main system.  (We don't throw error here because it seems useful to
- 	 * allow the pg_stat_statements functions to be created even when the
- 	 * module isn't active.  The functions must protect themselves against
- 	 * being called then, however.)
- 	 */
- 	if (!process_shared_preload_libraries_in_progress)
- 		return;
- 
- 	/*
- 	 * Define (or redefine) custom GUC variables.
- 	 */
- 	DefineCustomIntVariable("pg_stat_statements.max",
- 	  "Sets the maximum number of statements tracked by pg_stat_statements.",
- 							NULL,
- 							&pgss_max,
- 							1000,
- 							100,
- 							INT_MAX,
- 							PGC_POSTMASTER,
- 							0,
- 							NULL,
- 							NULL,
- 							NULL);
- 
- 	DefineCustomEnumVariable("pg_stat_statements.track",
- 			   "Selects which statements are tracked by pg_stat_statements.",
- 							 NULL,
- 							 &pgss_track,
- 							 PGSS_TRACK_TOP,
- 							 track_options,
- 							 PGC_SUSET,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	DefineCustomBoolVariable("pg_stat_statements.track_utility",
- 	   "Selects whether utility commands are tracked by pg_stat_statements.",
- 							 NULL,
- 							 &pgss_track_utility,
- 							 true,
- 							 PGC_SUSET,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	DefineCustomBoolVariable("pg_stat_statements.save",
- 			   "Save pg_stat_statements statistics across server shutdowns.",
- 							 NULL,
- 							 &pgss_save,
- 							 true,
- 							 PGC_SIGHUP,
- 							 0,
- 							 NULL,
- 							 NULL,
- 							 NULL);
- 
- 	EmitWarningsOnPlaceholders("pg_stat_statements");
- 
- 	/*
- 	 * Request additional shared resources.  (These are no-ops if we're not in
- 	 * the postmaster process.)  We'll allocate or attach to the shared
- 	 * resources in pgss_shmem_startup().
- 	 */
- 	RequestAddinShmemSpace(pgss_memsize());
- 	RequestAddinLWLocks(1);
- 
- 	/*
- 	 * Install hooks.
- 	 */
- 	prev_shmem_startup_hook = shmem_startup_hook;
- 	shmem_startup_hook = pgss_shmem_startup;
- 	prev_ExecutorStart = ExecutorStart_hook;
- 	ExecutorStart_hook = pgss_ExecutorStart;
- 	prev_ExecutorRun = ExecutorRun_hook;
- 	ExecutorRun_hook = pgss_ExecutorRun;
- 	prev_ExecutorFinish = ExecutorFinish_hook;
- 	ExecutorFinish_hook = pgss_ExecutorFinish;
- 	prev_ExecutorEnd = ExecutorEnd_hook;
- 	ExecutorEnd_hook = pgss_ExecutorEnd;
- 	prev_ProcessUtility = ProcessUtility_hook;
- 	ProcessUtility_hook = pgss_ProcessUtility;
- }
- 
- /*
-  * Module unload callback
-  */
- void
- _PG_fini(void)
- {
- 	/* Uninstall hooks. */
- 	shmem_startup_hook = prev_shmem_startup_hook;
- 	ExecutorStart_hook = prev_ExecutorStart;
- 	ExecutorRun_hook = prev_ExecutorRun;
- 	ExecutorFinish_hook = prev_ExecutorFinish;
- 	ExecutorEnd_hook = prev_ExecutorEnd;
- 	ProcessUtility_hook = prev_ProcessUtility;
- }
- 
- /*
-  * shmem_startup hook: allocate or attach to shared memory,
-  * then load any pre-existing statistics from file.
-  */
- static void
- pgss_shmem_startup(void)
- {
- 	bool		found;
- 	HASHCTL		info;
- 	FILE	   *file;
- 	uint32		header;
- 	int32		num;
- 	int32		i;
- 	int			query_size;
- 	int			buffer_size;
- 	char	   *buffer = NULL;
- 
- 	if (prev_shmem_startup_hook)
- 		prev_shmem_startup_hook();
- 
- 	/* reset in case this is a restart within the postmaster */
- 	pgss = NULL;
- 	pgss_hash = NULL;
- 
- 	/*
- 	 * Create or attach to the shared memory state, including hash table
- 	 */
- 	LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
- 
- 	pgss = ShmemInitStruct("pg_stat_statements",
- 						   sizeof(pgssSharedState),
- 						   &found);
- 
- 	if (!found)
- 	{
- 		/* First time through ... */
- 		pgss->lock = LWLockAssign();
- 		pgss->query_size = pgstat_track_activity_query_size;
- 	}
- 
- 	/* Be sure everyone agrees on the hash table entry size */
- 	query_size = pgss->query_size;
- 
- 	memset(&info, 0, sizeof(info));
- 	info.keysize = sizeof(pgssHashKey);
- 	info.entrysize = offsetof(pgssEntry, query) +query_size;
- 	info.hash = pgss_hash_fn;
- 	info.match = pgss_match_fn;
- 	pgss_hash = ShmemInitHash("pg_stat_statements hash",
- 							  pgss_max, pgss_max,
- 							  &info,
- 							  HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
- 
- 	LWLockRelease(AddinShmemInitLock);
- 
- 	/*
- 	 * If we're in the postmaster (or a standalone backend...), set up a shmem
- 	 * exit hook to dump the statistics to disk.
- 	 */
- 	if (!IsUnderPostmaster)
- 		on_shmem_exit(pgss_shmem_shutdown, (Datum) 0);
- 
- 	/*
- 	 * Attempt to load old statistics from the dump file, if this is the first
- 	 * time through and we weren't told not to.
- 	 */
- 	if (found || !pgss_save)
- 		return;
- 
- 	/*
- 	 * Note: we don't bother with locks here, because there should be no other
- 	 * processes running when this code is reached.
- 	 */
- 	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_R);
- 	if (file == NULL)
- 	{
- 		if (errno == ENOENT)
- 			return;				/* ignore not-found error */
- 		goto error;
- 	}
- 
- 	buffer_size = query_size;
- 	buffer = (char *) palloc(buffer_size);
- 
- 	if (fread(&header, sizeof(uint32), 1, file) != 1 ||
- 		header != PGSS_FILE_HEADER ||
- 		fread(&num, sizeof(int32), 1, file) != 1)
- 		goto error;
- 
- 	for (i = 0; i < num; i++)
- 	{
- 		pgssEntry	temp;
- 		pgssEntry  *entry;
- 
- 		if (fread(&temp, offsetof(pgssEntry, mutex), 1, file) != 1)
- 			goto error;
- 
- 		/* Encoding is the only field we can easily sanity-check */
- 		if (!PG_VALID_BE_ENCODING(temp.key.encoding))
- 			goto error;
- 
- 		/* Previous incarnation might have had a larger query_size */
- 		if (temp.key.query_len >= buffer_size)
- 		{
- 			buffer = (char *) repalloc(buffer, temp.key.query_len + 1);
- 			buffer_size = temp.key.query_len + 1;
- 		}
- 
- 		if (fread(buffer, 1, temp.key.query_len, file) != temp.key.query_len)
- 			goto error;
- 		buffer[temp.key.query_len] = '\0';
- 
- 		/* Clip to available length if needed */
- 		if (temp.key.query_len >= query_size)
- 			temp.key.query_len = pg_encoding_mbcliplen(temp.key.encoding,
- 													   buffer,
- 													   temp.key.query_len,
- 													   query_size - 1);
- 		temp.key.query_ptr = buffer;
- 
- 		/* make the hashtable entry (discards old entries if too many) */
- 		entry = entry_alloc(&temp.key);
- 
- 		/* copy in the actual stats */
- 		entry->counters = temp.counters;
- 	}
- 
- 	pfree(buffer);
- 	FreeFile(file);
- 	return;
- 
- error:
- 	ereport(LOG,
- 			(errcode_for_file_access(),
- 			 errmsg("could not read pg_stat_statement file \"%s\": %m",
- 					PGSS_DUMP_FILE)));
- 	if (buffer)
- 		pfree(buffer);
- 	if (file)
- 		FreeFile(file);
- 	/* If possible, throw away the bogus file; ignore any error */
- 	unlink(PGSS_DUMP_FILE);
- }
- 
- /*
-  * shmem_shutdown hook: Dump statistics into file.
-  *
-  * Note: we don't bother with acquiring lock, because there should be no
-  * other processes running when this is called.
-  */
- static void
- pgss_shmem_shutdown(int code, Datum arg)
- {
- 	FILE	   *file;
- 	HASH_SEQ_STATUS hash_seq;
- 	int32		num_entries;
- 	pgssEntry  *entry;
- 
- 	/* Don't try to dump during a crash. */
- 	if (code)
- 		return;
- 
- 	/* Safety check ... shouldn't get here unless shmem is set up. */
- 	if (!pgss || !pgss_hash)
- 		return;
- 
- 	/* Don't dump if told not to. */
- 	if (!pgss_save)
- 		return;
- 
- 	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_W);
- 	if (file == NULL)
- 		goto error;
- 
- 	if (fwrite(&PGSS_FILE_HEADER, sizeof(uint32), 1, file) != 1)
- 		goto error;
- 	num_entries = hash_get_num_entries(pgss_hash);
- 	if (fwrite(&num_entries, sizeof(int32), 1, file) != 1)
- 		goto error;
- 
- 	hash_seq_init(&hash_seq, pgss_hash);
- 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
- 	{
- 		int			len = entry->key.query_len;
- 
- 		if (fwrite(entry, offsetof(pgssEntry, mutex), 1, file) != 1 ||
- 			fwrite(entry->query, 1, len, file) != len)
- 			goto error;
- 	}
- 
- 	if (FreeFile(file))
- 	{
- 		file = NULL;
- 		goto error;
- 	}
- 
- 	return;
- 
- error:
- 	ereport(LOG,
- 			(errcode_for_file_access(),
- 			 errmsg("could not write pg_stat_statement file \"%s\": %m",
- 					PGSS_DUMP_FILE)));
- 	if (file)
- 		FreeFile(file);
- 	unlink(PGSS_DUMP_FILE);
- }
- 
- /*
-  * ExecutorStart hook: start up tracking if needed
-  */
- static void
- pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
- {
- 	if (prev_ExecutorStart)
- 		prev_ExecutorStart(queryDesc, eflags);
- 	else
- 		standard_ExecutorStart(queryDesc, eflags);
- 
- 	if (pgss_enabled())
- 	{
- 		/*
- 		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
- 		 * space is allocated in the per-query context so it will go away at
- 		 * ExecutorEnd.
- 		 */
- 		if (queryDesc->totaltime == NULL)
- 		{
- 			MemoryContext oldcxt;
- 
- 			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
- 			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
- 			MemoryContextSwitchTo(oldcxt);
- 		}
- 	}
- }
- 
- /*
-  * ExecutorRun hook: all we need do is track nesting depth
-  */
- static void
- pgss_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
- {
- 	nested_level++;
- 	PG_TRY();
- 	{
- 		if (prev_ExecutorRun)
- 			prev_ExecutorRun(queryDesc, direction, count);
- 		else
- 			standard_ExecutorRun(queryDesc, direction, count);
- 		nested_level--;
- 	}
- 	PG_CATCH();
- 	{
- 		nested_level--;
- 		PG_RE_THROW();
- 	}
- 	PG_END_TRY();
- }
- 
- /*
-  * ExecutorFinish hook: all we need do is track nesting depth
-  */
- static void
- pgss_ExecutorFinish(QueryDesc *queryDesc)
- {
- 	nested_level++;
- 	PG_TRY();
- 	{
- 		if (prev_ExecutorFinish)
- 			prev_ExecutorFinish(queryDesc);
- 		else
- 			standard_ExecutorFinish(queryDesc);
- 		nested_level--;
- 	}
- 	PG_CATCH();
- 	{
- 		nested_level--;
- 		PG_RE_THROW();
- 	}
- 	PG_END_TRY();
- }
- 
- /*
-  * ExecutorEnd hook: store results if needed
-  */
- static void
- pgss_ExecutorEnd(QueryDesc *queryDesc)
- {
- 	if (queryDesc->totaltime && pgss_enabled())
- 	{
- 		/*
- 		 * Make sure stats accumulation is done.  (Note: it's okay if several
- 		 * levels of hook all do this.)
- 		 */
- 		InstrEndLoop(queryDesc->totaltime);
- 
- 		pgss_store(queryDesc->sourceText,
- 				   queryDesc->totaltime->total,
- 				   queryDesc->estate->es_processed,
- 				   &queryDesc->totaltime->bufusage);
- 	}
- 
- 	if (prev_ExecutorEnd)
- 		prev_ExecutorEnd(queryDesc);
- 	else
- 		standard_ExecutorEnd(queryDesc);
- }
- 
- /*
-  * ProcessUtility hook
-  */
- static void
- pgss_ProcessUtility(Node *parsetree, const char *queryString,
- 					ParamListInfo params, bool isTopLevel,
- 					DestReceiver *dest, char *completionTag)
- {
- 	if (pgss_track_utility && pgss_enabled())
- 	{
- 		instr_time	start;
- 		instr_time	duration;
- 		uint64		rows = 0;
- 		BufferUsage bufusage;
- 
- 		bufusage = pgBufferUsage;
- 		INSTR_TIME_SET_CURRENT(start);
- 
- 		nested_level++;
- 		PG_TRY();
- 		{
- 			if (prev_ProcessUtility)
- 				prev_ProcessUtility(parsetree, queryString, params,
- 									isTopLevel, dest, completionTag);
- 			else
- 				standard_ProcessUtility(parsetree, queryString, params,
- 										isTopLevel, dest, completionTag);
- 			nested_level--;
- 		}
- 		PG_CATCH();
- 		{
- 			nested_level--;
- 			PG_RE_THROW();
- 		}
- 		PG_END_TRY();
- 
- 		INSTR_TIME_SET_CURRENT(duration);
- 		INSTR_TIME_SUBTRACT(duration, start);
- 
- 		/* parse command tag to retrieve the number of affected rows. */
- 		if (completionTag &&
- 			sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1)
- 			rows = 0;
- 
- 		/* calc differences of buffer counters. */
- 		bufusage.shared_blks_hit =
- 			pgBufferUsage.shared_blks_hit - bufusage.shared_blks_hit;
- 		bufusage.shared_blks_read =
- 			pgBufferUsage.shared_blks_read - bufusage.shared_blks_read;
- 		bufusage.shared_blks_written =
- 			pgBufferUsage.shared_blks_written - bufusage.shared_blks_written;
- 		bufusage.local_blks_hit =
- 			pgBufferUsage.local_blks_hit - bufusage.local_blks_hit;
- 		bufusage.local_blks_read =
- 			pgBufferUsage.local_blks_read - bufusage.local_blks_read;
- 		bufusage.local_blks_written =
- 			pgBufferUsage.local_blks_written - bufusage.local_blks_written;
- 		bufusage.temp_blks_read =
- 			pgBufferUsage.temp_blks_read - bufusage.temp_blks_read;
- 		bufusage.temp_blks_written =
- 			pgBufferUsage.temp_blks_written - bufusage.temp_blks_written;
- 
- 		pgss_store(queryString, INSTR_TIME_GET_DOUBLE(duration), rows,
- 				   &bufusage);
- 	}
- 	else
- 	{
- 		if (prev_ProcessUtility)
- 			prev_ProcessUtility(parsetree, queryString, params,
- 								isTopLevel, dest, completionTag);
- 		else
- 			standard_ProcessUtility(parsetree, queryString, params,
- 									isTopLevel, dest, completionTag);
- 	}
- }
- 
- /*
-  * Calculate hash value for a key
-  */
- static uint32
- pgss_hash_fn(const void *key, Size keysize)
- {
- 	const pgssHashKey *k = (const pgssHashKey *) key;
- 
- 	/* we don't bother to include encoding in the hash */
- 	return hash_uint32((uint32) k->userid) ^
- 		hash_uint32((uint32) k->dbid) ^
- 		DatumGetUInt32(hash_any((const unsigned char *) k->query_ptr,
- 								k->query_len));
- }
- 
- /*
-  * Compare two keys - zero means match
-  */
- static int
- pgss_match_fn(const void *key1, const void *key2, Size keysize)
- {
- 	const pgssHashKey *k1 = (const pgssHashKey *) key1;
- 	const pgssHashKey *k2 = (const pgssHashKey *) key2;
- 
- 	if (k1->userid == k2->userid &&
- 		k1->dbid == k2->dbid &&
- 		k1->encoding == k2->encoding &&
- 		k1->query_len == k2->query_len &&
- 		memcmp(k1->query_ptr, k2->query_ptr, k1->query_len) == 0)
- 		return 0;
- 	else
- 		return 1;
- }
- 
- /*
-  * Store some statistics for a statement.
-  */
- static void
- pgss_store(const char *query, double total_time, uint64 rows,
- 		   const BufferUsage *bufusage)
- {
- 	pgssHashKey key;
- 	double		usage;
- 	pgssEntry  *entry;
- 
- 	Assert(query != NULL);
- 
- 	/* Safety check... */
- 	if (!pgss || !pgss_hash)
- 		return;
- 
- 	/* Set up key for hashtable search */
- 	key.userid = GetUserId();
- 	key.dbid = MyDatabaseId;
- 	key.encoding = GetDatabaseEncoding();
- 	key.query_len = strlen(query);
- 	if (key.query_len >= pgss->query_size)
- 		key.query_len = pg_encoding_mbcliplen(key.encoding,
- 											  query,
- 											  key.query_len,
- 											  pgss->query_size - 1);
- 	key.query_ptr = query;
- 
- 	usage = USAGE_EXEC(duration);
- 
- 	/* Lookup the hash table entry with shared lock. */
- 	LWLockAcquire(pgss->lock, LW_SHARED);
- 
- 	entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);
- 	if (!entry)
- 	{
- 		/* Must acquire exclusive lock to add a new entry. */
- 		LWLockRelease(pgss->lock);
- 		LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
- 		entry = entry_alloc(&key);
- 	}
- 
- 	/* Grab the spinlock while updating the counters. */
- 	{
- 		volatile pgssEntry *e = (volatile pgssEntry *) entry;
- 
- 		SpinLockAcquire(&e->mutex);
- 		e->counters.calls += 1;
- 		e->counters.total_time += total_time;
- 		e->counters.rows += rows;
- 		e->counters.shared_blks_hit += bufusage->shared_blks_hit;
- 		e->counters.shared_blks_read += bufusage->shared_blks_read;
- 		e->counters.shared_blks_written += bufusage->shared_blks_written;
- 		e->counters.local_blks_hit += bufusage->local_blks_hit;
- 		e->counters.local_blks_read += bufusage->local_blks_read;
- 		e->counters.local_blks_written += bufusage->local_blks_written;
- 		e->counters.temp_blks_read += bufusage->temp_blks_read;
- 		e->counters.temp_blks_written += bufusage->temp_blks_written;
- 		e->counters.usage += usage;
- 		SpinLockRelease(&e->mutex);
- 	}
- 
- 	LWLockRelease(pgss->lock);
- }
- 
- /*
-  * Reset all statement statistics.
-  */
- Datum
- pg_stat_statements_reset(PG_FUNCTION_ARGS)
- {
- 	if (!pgss || !pgss_hash)
- 		ereport(ERROR,
- 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- 				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
- 	entry_reset();
- 	PG_RETURN_VOID();
- }
- 
- #define PG_STAT_STATEMENTS_COLS		14
- 
- /*
-  * Retrieve statement statistics.
-  */
- Datum
- pg_stat_statements(PG_FUNCTION_ARGS)
- {
- 	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
- 	TupleDesc	tupdesc;
- 	Tuplestorestate *tupstore;
- 	MemoryContext per_query_ctx;
- 	MemoryContext oldcontext;
- 	Oid			userid = GetUserId();
- 	bool		is_superuser = superuser();
- 	HASH_SEQ_STATUS hash_seq;
- 	pgssEntry  *entry;
- 
- 	if (!pgss || !pgss_hash)
- 		ereport(ERROR,
- 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- 				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
- 
- 	/* check to see if caller supports us returning a tuplestore */
- 	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				 errmsg("set-valued function called in context that cannot accept a set")));
- 	if (!(rsinfo->allowedModes & SFRM_Materialize))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				 errmsg("materialize mode required, but it is not " \
- 						"allowed in this context")));
- 
- 	/* Build a tuple descriptor for our result type */
- 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
- 		elog(ERROR, "return type must be a row type");
- 
- 	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
- 	oldcontext = MemoryContextSwitchTo(per_query_ctx);
- 
- 	tupstore = tuplestore_begin_heap(true, false, work_mem);
- 	rsinfo->returnMode = SFRM_Materialize;
- 	rsinfo->setResult = tupstore;
- 	rsinfo->setDesc = tupdesc;
- 
- 	MemoryContextSwitchTo(oldcontext);
- 
- 	LWLockAcquire(pgss->lock, LW_SHARED);
- 
- 	hash_seq_init(&hash_seq, pgss_hash);
- 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
- 	{
- 		Datum		values[PG_STAT_STATEMENTS_COLS];
- 		bool		nulls[PG_STAT_STATEMENTS_COLS];
- 		int			i = 0;
- 		Counters	tmp;
- 
- 		memset(values, 0, sizeof(values));
- 		memset(nulls, 0, sizeof(nulls));
- 
- 		values[i++] = ObjectIdGetDatum(entry->key.userid);
- 		values[i++] = ObjectIdGetDatum(entry->key.dbid);
- 
- 		if (is_superuser || entry->key.userid == userid)
- 		{
- 			char	   *qstr;
- 
- 			qstr = (char *)
- 				pg_do_encoding_conversion((unsigned char *) entry->query,
- 										  entry->key.query_len,
- 										  entry->key.encoding,
- 										  GetDatabaseEncoding());
- 			values[i++] = CStringGetTextDatum(qstr);
- 			if (qstr != entry->query)
- 				pfree(qstr);
- 		}
- 		else
- 			values[i++] = CStringGetTextDatum("<insufficient privilege>");
- 
- 		/* copy counters to a local variable to keep locking time short */
- 		{
- 			volatile pgssEntry *e = (volatile pgssEntry *) entry;
- 
- 			SpinLockAcquire(&e->mutex);
- 			tmp = e->counters;
- 			SpinLockRelease(&e->mutex);
- 		}
- 
- 		values[i++] = Int64GetDatumFast(tmp.calls);
- 		values[i++] = Float8GetDatumFast(tmp.total_time);
- 		values[i++] = Int64GetDatumFast(tmp.rows);
- 		values[i++] = Int64GetDatumFast(tmp.shared_blks_hit);
- 		values[i++] = Int64GetDatumFast(tmp.shared_blks_read);
- 		values[i++] = Int64GetDatumFast(tmp.shared_blks_written);
- 		values[i++] = Int64GetDatumFast(tmp.local_blks_hit);
- 		values[i++] = Int64GetDatumFast(tmp.local_blks_read);
- 		values[i++] = Int64GetDatumFast(tmp.local_blks_written);
- 		values[i++] = Int64GetDatumFast(tmp.temp_blks_read);
- 		values[i++] = Int64GetDatumFast(tmp.temp_blks_written);
- 
- 		Assert(i == PG_STAT_STATEMENTS_COLS);
- 
- 		tuplestore_putvalues(tupstore, tupdesc, values, nulls);
- 	}
- 
- 	LWLockRelease(pgss->lock);
- 
- 	/* clean up and return the tuplestore */
- 	tuplestore_donestoring(tupstore);
- 
- 	return (Datum) 0;
- }
- 
- /*
-  * Estimate shared memory space needed.
-  */
- static Size
- pgss_memsize(void)
- {
- 	Size		size;
- 	Size		entrysize;
- 
- 	size = MAXALIGN(sizeof(pgssSharedState));
- 	entrysize = offsetof(pgssEntry, query) +pgstat_track_activity_query_size;
- 	size = add_size(size, hash_estimate_size(pgss_max, entrysize));
- 
- 	return size;
- }
- 
- /*
-  * Allocate a new hashtable entry.
-  * caller must hold an exclusive lock on pgss->lock
-  *
-  * Note: despite needing exclusive lock, it's not an error for the target
-  * entry to already exist.	This is because pgss_store releases and
-  * reacquires lock after failing to find a match; so someone else could
-  * have made the entry while we waited to get exclusive lock.
-  */
- static pgssEntry *
- entry_alloc(pgssHashKey *key)
- {
- 	pgssEntry  *entry;
- 	bool		found;
- 
- 	/* Caller must have clipped query properly */
- 	Assert(key->query_len < pgss->query_size);
- 
- 	/* Make space if needed */
- 	while (hash_get_num_entries(pgss_hash) >= pgss_max)
- 		entry_dealloc();
- 
- 	/* Find or create an entry with desired hash code */
- 	entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER, &found);
- 
- 	if (!found)
- 	{
- 		/* New entry, initialize it */
- 
- 		/* dynahash tried to copy the key for us, but must fix query_ptr */
- 		entry->key.query_ptr = entry->query;
- 		/* reset the statistics */
- 		memset(&entry->counters, 0, sizeof(Counters));
- 		entry->counters.usage = USAGE_INIT;
- 		/* re-initialize the mutex each time ... we assume no one using it */
- 		SpinLockInit(&entry->mutex);
- 		/* ... and don't forget the query text */
- 		memcpy(entry->query, key->query_ptr, key->query_len);
- 		entry->query[key->query_len] = '\0';
- 	}
- 
- 	return entry;
- }
- 
- /*
-  * qsort comparator for sorting into increasing usage order
-  */
- static int
- entry_cmp(const void *lhs, const void *rhs)
- {
- 	double		l_usage = (*(pgssEntry * const *) lhs)->counters.usage;
- 	double		r_usage = (*(pgssEntry * const *) rhs)->counters.usage;
- 
- 	if (l_usage < r_usage)
- 		return -1;
- 	else if (l_usage > r_usage)
- 		return +1;
- 	else
- 		return 0;
- }
- 
- /*
-  * Deallocate least used entries.
-  * Caller must hold an exclusive lock on pgss->lock.
-  */
- static void
- entry_dealloc(void)
- {
- 	HASH_SEQ_STATUS hash_seq;
- 	pgssEntry **entries;
- 	pgssEntry  *entry;
- 	int			nvictims;
- 	int			i;
- 
- 	/* Sort entries by usage and deallocate USAGE_DEALLOC_PERCENT of them. */
- 
- 	entries = palloc(hash_get_num_entries(pgss_hash) * sizeof(pgssEntry *));
- 
- 	i = 0;
- 	hash_seq_init(&hash_seq, pgss_hash);
- 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
- 	{
- 		entries[i++] = entry;
- 		entry->counters.usage *= USAGE_DECREASE_FACTOR;
- 	}
- 
- 	qsort(entries, i, sizeof(pgssEntry *), entry_cmp);
- 	nvictims = Max(10, i * USAGE_DEALLOC_PERCENT / 100);
- 	nvictims = Min(nvictims, i);
- 
- 	for (i = 0; i < nvictims; i++)
- 	{
- 		hash_search(pgss_hash, &entries[i]->key, HASH_REMOVE, NULL);
- 	}
- 
- 	pfree(entries);
- }
- 
- /*
-  * Release all entries.
-  */
- static void
- entry_reset(void)
- {
- 	HASH_SEQ_STATUS hash_seq;
- 	pgssEntry  *entry;
- 
- 	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
- 
- 	hash_seq_init(&hash_seq, pgss_hash);
- 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
- 	{
- 		hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
- 	}
- 
- 	LWLockRelease(pgss->lock);
- }
--- 0 ----
diff --git a/contrib/pg_stat_statements/pg_stat_statements.control b/contrib/pg_stat_statements/pg_stat_statements.control
index 6f9a947..e69de29 100644
*** a/contrib/pg_stat_statements/pg_stat_statements.control
--- b/contrib/pg_stat_statements/pg_stat_statements.control
***************
*** 1,5 ****
- # pg_stat_statements extension
- comment = 'track execution statistics of all SQL statements executed'
- default_version = '1.0'
- module_pathname = '$libdir/pg_stat_statements'
- relocatable = true
--- 0 ----
diff --git a/contrib/pgrowlocks/Makefile b/contrib/pgrowlocks/Makefile
index f56389b..e69de29 100644
*** a/contrib/pgrowlocks/Makefile
--- b/contrib/pgrowlocks/Makefile
***************
*** 1,18 ****
- # contrib/pgrowlocks/Makefile
- 
- MODULE_big	= pgrowlocks
- OBJS		= pgrowlocks.o
- 
- EXTENSION = pgrowlocks
- DATA = pgrowlocks--1.0.sql pgrowlocks--unpackaged--1.0.sql
- 
- ifdef USE_PGXS
- PG_CONFIG = pg_config
- PGXS := $(shell $(PG_CONFIG) --pgxs)
- include $(PGXS)
- else
- subdir = contrib/pgrowlocks
- top_builddir = ../..
- include $(top_builddir)/src/Makefile.global
- include $(top_srcdir)/contrib/contrib-global.mk
- endif
--- 0 ----
diff --git a/contrib/pgrowlocks/pgrowlocks--1.0.sql b/contrib/pgrowlocks/pgrowlocks--1.0.sql
index a909b74..e69de29 100644
*** a/contrib/pgrowlocks/pgrowlocks--1.0.sql
--- b/contrib/pgrowlocks/pgrowlocks--1.0.sql
***************
*** 1,15 ****
- /* contrib/pgrowlocks/pgrowlocks--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pgrowlocks" to load this file. \quit
- 
- CREATE FUNCTION pgrowlocks(IN relname text,
-     OUT locked_row TID,		-- row TID
-     OUT lock_type TEXT,		-- lock type
-     OUT locker XID,		-- locking XID
-     OUT multi bool,		-- multi XID?
-     OUT xids xid[],		-- multi XIDs
-     OUT pids INTEGER[])		-- locker's process id
- RETURNS SETOF record
- AS 'MODULE_PATHNAME', 'pgrowlocks'
- LANGUAGE C STRICT;
--- 0 ----
diff --git a/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql b/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
index b8c3faf..e69de29 100644
*** a/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
--- b/contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
***************
*** 1,6 ****
- /* contrib/pgrowlocks/pgrowlocks--unpackaged--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pgrowlocks" to load this file. \quit
- 
- ALTER EXTENSION pgrowlocks ADD function pgrowlocks(text);
--- 0 ----
diff --git a/contrib/pgrowlocks/pgrowlocks.c b/contrib/pgrowlocks/pgrowlocks.c
index 20beed2..e69de29 100644
*** a/contrib/pgrowlocks/pgrowlocks.c
--- b/contrib/pgrowlocks/pgrowlocks.c
***************
*** 1,220 ****
- /*
-  * contrib/pgrowlocks/pgrowlocks.c
-  *
-  * Copyright (c) 2005-2006	Tatsuo Ishii
-  *
-  * Permission to use, copy, modify, and distribute this software and
-  * its documentation for any purpose, without fee, and without a
-  * written agreement is hereby granted, provided that the above
-  * copyright notice and this paragraph and the following two
-  * paragraphs appear in all copies.
-  *
-  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
-  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
-  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
-  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
-  * OF THE POSSIBILITY OF SUCH DAMAGE.
-  *
-  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
-  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
-  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
-  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
-  */
- 
- #include "postgres.h"
- 
- #include "access/multixact.h"
- #include "access/relscan.h"
- #include "access/xact.h"
- #include "catalog/namespace.h"
- #include "funcapi.h"
- #include "miscadmin.h"
- #include "storage/bufmgr.h"
- #include "storage/procarray.h"
- #include "utils/acl.h"
- #include "utils/builtins.h"
- #include "utils/rel.h"
- #include "utils/tqual.h"
- 
- 
- PG_MODULE_MAGIC;
- 
- PG_FUNCTION_INFO_V1(pgrowlocks);
- 
- extern Datum pgrowlocks(PG_FUNCTION_ARGS);
- 
- /* ----------
-  * pgrowlocks:
-  * returns tids of rows being locked
-  * ----------
-  */
- 
- #define NCHARS 32
- 
- typedef struct
- {
- 	Relation	rel;
- 	HeapScanDesc scan;
- 	int			ncolumns;
- } MyData;
- 
- Datum
- pgrowlocks(PG_FUNCTION_ARGS)
- {
- 	FuncCallContext *funcctx;
- 	HeapScanDesc scan;
- 	HeapTuple	tuple;
- 	TupleDesc	tupdesc;
- 	AttInMetadata *attinmeta;
- 	Datum		result;
- 	MyData	   *mydata;
- 	Relation	rel;
- 
- 	if (SRF_IS_FIRSTCALL())
- 	{
- 		text	   *relname;
- 		RangeVar   *relrv;
- 		MemoryContext oldcontext;
- 		AclResult	aclresult;
- 
- 		funcctx = SRF_FIRSTCALL_INIT();
- 		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
- 
- 		/* Build a tuple descriptor for our result type */
- 		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
- 			elog(ERROR, "return type must be a row type");
- 
- 		attinmeta = TupleDescGetAttInMetadata(tupdesc);
- 		funcctx->attinmeta = attinmeta;
- 
- 		relname = PG_GETARG_TEXT_P(0);
- 		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 		rel = heap_openrv(relrv, AccessShareLock);
- 
- 		/* check permissions: must have SELECT on table */
- 		aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),
- 									  ACL_SELECT);
- 		if (aclresult != ACLCHECK_OK)
- 			aclcheck_error(aclresult, ACL_KIND_CLASS,
- 						   RelationGetRelationName(rel));
- 
- 		scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
- 		mydata = palloc(sizeof(*mydata));
- 		mydata->rel = rel;
- 		mydata->scan = scan;
- 		mydata->ncolumns = tupdesc->natts;
- 		funcctx->user_fctx = mydata;
- 
- 		MemoryContextSwitchTo(oldcontext);
- 	}
- 
- 	funcctx = SRF_PERCALL_SETUP();
- 	attinmeta = funcctx->attinmeta;
- 	mydata = (MyData *) funcctx->user_fctx;
- 	scan = mydata->scan;
- 
- 	/* scan the relation */
- 	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
- 	{
- 		/* must hold a buffer lock to call HeapTupleSatisfiesUpdate */
- 		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
- 
- 		if (HeapTupleSatisfiesUpdate(tuple->t_data,
- 									 GetCurrentCommandId(false),
- 									 scan->rs_cbuf) == HeapTupleBeingUpdated)
- 		{
- 
- 			char	  **values;
- 			int			i;
- 
- 			values = (char **) palloc(mydata->ncolumns * sizeof(char *));
- 
- 			i = 0;
- 			values[i++] = (char *) DirectFunctionCall1(tidout, PointerGetDatum(&tuple->t_self));
- 
- 			if (tuple->t_data->t_infomask & HEAP_XMAX_SHARED_LOCK)
- 				values[i++] = pstrdup("Shared");
- 			else
- 				values[i++] = pstrdup("Exclusive");
- 			values[i] = palloc(NCHARS * sizeof(char));
- 			snprintf(values[i++], NCHARS, "%d", HeapTupleHeaderGetXmax(tuple->t_data));
- 			if (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
- 			{
- 				TransactionId *xids;
- 				int			nxids;
- 				int			j;
- 				int			isValidXid = 0;		/* any valid xid ever exists? */
- 
- 				values[i++] = pstrdup("true");
- 				nxids = GetMultiXactIdMembers(HeapTupleHeaderGetXmax(tuple->t_data), &xids);
- 				if (nxids == -1)
- 				{
- 					elog(ERROR, "GetMultiXactIdMembers returns error");
- 				}
- 
- 				values[i] = palloc(NCHARS * nxids);
- 				values[i + 1] = palloc(NCHARS * nxids);
- 				strcpy(values[i], "{");
- 				strcpy(values[i + 1], "{");
- 
- 				for (j = 0; j < nxids; j++)
- 				{
- 					char		buf[NCHARS];
- 
- 					if (TransactionIdIsInProgress(xids[j]))
- 					{
- 						if (isValidXid)
- 						{
- 							strcat(values[i], ",");
- 							strcat(values[i + 1], ",");
- 						}
- 						snprintf(buf, NCHARS, "%d", xids[j]);
- 						strcat(values[i], buf);
- 						snprintf(buf, NCHARS, "%d", BackendXidGetPid(xids[j]));
- 						strcat(values[i + 1], buf);
- 
- 						isValidXid = 1;
- 					}
- 				}
- 
- 				strcat(values[i], "}");
- 				strcat(values[i + 1], "}");
- 				i++;
- 			}
- 			else
- 			{
- 				values[i++] = pstrdup("false");
- 				values[i] = palloc(NCHARS * sizeof(char));
- 				snprintf(values[i++], NCHARS, "{%d}", HeapTupleHeaderGetXmax(tuple->t_data));
- 
- 				values[i] = palloc(NCHARS * sizeof(char));
- 				snprintf(values[i++], NCHARS, "{%d}", BackendXidGetPid(HeapTupleHeaderGetXmax(tuple->t_data)));
- 			}
- 
- 			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
- 
- 			/* build a tuple */
- 			tuple = BuildTupleFromCStrings(attinmeta, values);
- 
- 			/* make the tuple into a datum */
- 			result = HeapTupleGetDatum(tuple);
- 
- 			/* Clean up */
- 			for (i = 0; i < mydata->ncolumns; i++)
- 				pfree(values[i]);
- 			pfree(values);
- 
- 			SRF_RETURN_NEXT(funcctx, result);
- 		}
- 		else
- 		{
- 			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
- 		}
- 	}
- 
- 	heap_endscan(scan);
- 	heap_close(mydata->rel, AccessShareLock);
- 
- 	SRF_RETURN_DONE(funcctx);
- }
--- 0 ----
diff --git a/contrib/pgrowlocks/pgrowlocks.control b/contrib/pgrowlocks/pgrowlocks.control
index a6ba164..e69de29 100644
*** a/contrib/pgrowlocks/pgrowlocks.control
--- b/contrib/pgrowlocks/pgrowlocks.control
***************
*** 1,5 ****
- # pgrowlocks extension
- comment = 'show row-level locking information'
- default_version = '1.0'
- module_pathname = '$libdir/pgrowlocks'
- relocatable = true
--- 0 ----
diff --git a/contrib/pgstattuple/.gitignore b/contrib/pgstattuple/.gitignore
index 5dcb3ff..e69de29 100644
*** a/contrib/pgstattuple/.gitignore
--- b/contrib/pgstattuple/.gitignore
***************
*** 1,4 ****
- # Generated subdirectories
- /log/
- /results/
- /tmp_check/
--- 0 ----
diff --git a/contrib/pgstattuple/Makefile b/contrib/pgstattuple/Makefile
index 6ac2775..e69de29 100644
*** a/contrib/pgstattuple/Makefile
--- b/contrib/pgstattuple/Makefile
***************
*** 1,20 ****
- # contrib/pgstattuple/Makefile
- 
- MODULE_big	= pgstattuple
- OBJS		= pgstattuple.o pgstatindex.o
- 
- EXTENSION = pgstattuple
- DATA = pgstattuple--1.0.sql pgstattuple--unpackaged--1.0.sql
- 
- REGRESS = pgstattuple
- 
- ifdef USE_PGXS
- PG_CONFIG = pg_config
- PGXS := $(shell $(PG_CONFIG) --pgxs)
- include $(PGXS)
- else
- subdir = contrib/pgstattuple
- top_builddir = ../..
- include $(top_builddir)/src/Makefile.global
- include $(top_srcdir)/contrib/contrib-global.mk
- endif
--- 0 ----
diff --git a/contrib/pgstattuple/expected/pgstattuple.out b/contrib/pgstattuple/expected/pgstattuple.out
index 7f28177..e69de29 100644
*** a/contrib/pgstattuple/expected/pgstattuple.out
--- b/contrib/pgstattuple/expected/pgstattuple.out
***************
*** 1,38 ****
- CREATE EXTENSION pgstattuple;
- --
- -- It's difficult to come up with platform-independent test cases for
- -- the pgstattuple functions, but the results for empty tables and
- -- indexes should be that.
- --
- create table test (a int primary key);
- NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test_pkey" for table "test"
- select * from pgstattuple('test'::text);
-  table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent 
- -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
-          0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
- (1 row)
- 
- select * from pgstattuple('test'::regclass);
-  table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent 
- -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
-          0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
- (1 row)
- 
- select * from pgstatindex('test_pkey');
-  version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation 
- ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
-        2 |          0 |          0 |             0 |              0 |          0 |           0 |             0 |              NaN |                NaN
- (1 row)
- 
- select pg_relpages('test');
-  pg_relpages 
- -------------
-            0
- (1 row)
- 
- select pg_relpages('test_pkey');
-  pg_relpages 
- -------------
-            1
- (1 row)
- 
--- 0 ----
diff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c
index beff1b9..e69de29 100644
*** a/contrib/pgstattuple/pgstatindex.c
--- b/contrib/pgstattuple/pgstatindex.c
***************
*** 1,293 ****
- /*
-  * contrib/pgstattuple/pgstatindex.c
-  *
-  *
-  * pgstatindex
-  *
-  * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
-  *
-  * Permission to use, copy, modify, and distribute this software and
-  * its documentation for any purpose, without fee, and without a
-  * written agreement is hereby granted, provided that the above
-  * copyright notice and this paragraph and the following two
-  * paragraphs appear in all copies.
-  *
-  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
-  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
-  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
-  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
-  * OF THE POSSIBILITY OF SUCH DAMAGE.
-  *
-  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
-  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
-  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
-  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
-  */
- 
- #include "postgres.h"
- 
- #include "access/heapam.h"
- #include "access/nbtree.h"
- #include "catalog/namespace.h"
- #include "funcapi.h"
- #include "miscadmin.h"
- #include "storage/bufmgr.h"
- #include "utils/builtins.h"
- #include "utils/rel.h"
- 
- 
- extern Datum pgstatindex(PG_FUNCTION_ARGS);
- extern Datum pg_relpages(PG_FUNCTION_ARGS);
- 
- PG_FUNCTION_INFO_V1(pgstatindex);
- PG_FUNCTION_INFO_V1(pg_relpages);
- 
- #define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
- #define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
- 
- #define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
- 		if ( !(FirstOffsetNumber <= (offnum) && \
- 						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
- 			 elog(ERROR, "page offset number out of range"); }
- 
- /* note: BlockNumber is unsigned, hence can't be negative */
- #define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
- 		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
- 			 elog(ERROR, "block number out of range"); }
- 
- /* ------------------------------------------------
-  * A structure for a whole btree index statistics
-  * used by pgstatindex().
-  * ------------------------------------------------
-  */
- typedef struct BTIndexStat
- {
- 	uint32		version;
- 	uint32		level;
- 	BlockNumber root_blkno;
- 
- 	uint64		root_pages;
- 	uint64		internal_pages;
- 	uint64		leaf_pages;
- 	uint64		empty_pages;
- 	uint64		deleted_pages;
- 
- 	uint64		max_avail;
- 	uint64		free_space;
- 
- 	uint64		fragments;
- } BTIndexStat;
- 
- /* ------------------------------------------------------
-  * pgstatindex()
-  *
-  * Usage: SELECT * FROM pgstatindex('t1_pkey');
-  * ------------------------------------------------------
-  */
- Datum
- pgstatindex(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	Relation	rel;
- 	RangeVar   *relrv;
- 	Datum		result;
- 	BlockNumber nblocks;
- 	BlockNumber blkno;
- 	BTIndexStat indexStat;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use pgstattuple functions"))));
- 
- 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 	rel = relation_openrv(relrv, AccessShareLock);
- 
- 	if (!IS_INDEX(rel) || !IS_BTREE(rel))
- 		elog(ERROR, "relation \"%s\" is not a btree index",
- 			 RelationGetRelationName(rel));
- 
- 	/*
- 	 * Reject attempts to read non-local temporary relations; we would be
- 	 * likely to get wrong data since we have no visibility into the owning
- 	 * session's local buffers.
- 	 */
- 	if (RELATION_IS_OTHER_TEMP(rel))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				 errmsg("cannot access temporary tables of other sessions")));
- 
- 	/*
- 	 * Read metapage
- 	 */
- 	{
- 		Buffer		buffer = ReadBuffer(rel, 0);
- 		Page		page = BufferGetPage(buffer);
- 		BTMetaPageData *metad = BTPageGetMeta(page);
- 
- 		indexStat.version = metad->btm_version;
- 		indexStat.level = metad->btm_level;
- 		indexStat.root_blkno = metad->btm_root;
- 
- 		ReleaseBuffer(buffer);
- 	}
- 
- 	/* -- init counters -- */
- 	indexStat.root_pages = 0;
- 	indexStat.internal_pages = 0;
- 	indexStat.leaf_pages = 0;
- 	indexStat.empty_pages = 0;
- 	indexStat.deleted_pages = 0;
- 
- 	indexStat.max_avail = 0;
- 	indexStat.free_space = 0;
- 
- 	indexStat.fragments = 0;
- 
- 	/*
- 	 * Scan all blocks except the metapage
- 	 */
- 	nblocks = RelationGetNumberOfBlocks(rel);
- 
- 	for (blkno = 1; blkno < nblocks; blkno++)
- 	{
- 		Buffer		buffer;
- 		Page		page;
- 		BTPageOpaque opaque;
- 
- 		CHECK_FOR_INTERRUPTS();
- 
- 		/* Read and lock buffer */
- 		buffer = ReadBuffer(rel, blkno);
- 		LockBuffer(buffer, BUFFER_LOCK_SHARE);
- 
- 		page = BufferGetPage(buffer);
- 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
- 
- 		/* Determine page type, and update totals */
- 
- 		if (P_ISLEAF(opaque))
- 		{
- 			int			max_avail;
- 
- 			max_avail = BLCKSZ - (BLCKSZ - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
- 			indexStat.max_avail += max_avail;
- 			indexStat.free_space += PageGetFreeSpace(page);
- 
- 			indexStat.leaf_pages++;
- 
- 			/*
- 			 * If the next leaf is on an earlier block, it means a
- 			 * fragmentation.
- 			 */
- 			if (opaque->btpo_next != P_NONE && opaque->btpo_next < blkno)
- 				indexStat.fragments++;
- 		}
- 		else if (P_ISDELETED(opaque))
- 			indexStat.deleted_pages++;
- 		else if (P_IGNORE(opaque))
- 			indexStat.empty_pages++;
- 		else if (P_ISROOT(opaque))
- 			indexStat.root_pages++;
- 		else
- 			indexStat.internal_pages++;
- 
- 		/* Unlock and release buffer */
- 		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
- 		ReleaseBuffer(buffer);
- 	}
- 
- 	relation_close(rel, AccessShareLock);
- 
- 	/*----------------------------
- 	 * Build a result tuple
- 	 *----------------------------
- 	 */
- 	{
- 		TupleDesc	tupleDesc;
- 		int			j;
- 		char	   *values[10];
- 		HeapTuple	tuple;
- 
- 		/* Build a tuple descriptor for our result type */
- 		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
- 			elog(ERROR, "return type must be a row type");
- 
- 		j = 0;
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "%d", indexStat.version);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "%d", indexStat.level);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, INT64_FORMAT,
- 				 (indexStat.root_pages +
- 				  indexStat.leaf_pages +
- 				  indexStat.internal_pages +
- 				  indexStat.deleted_pages +
- 				  indexStat.empty_pages) * BLCKSZ);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, "%u", indexStat.root_blkno);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.internal_pages);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.leaf_pages);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.empty_pages);
- 		values[j] = palloc(32);
- 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.deleted_pages);
- 		values[j] = palloc(32);
- 		if (indexStat.max_avail > 0)
- 			snprintf(values[j++], 32, "%.2f",
- 					 100.0 - (double) indexStat.free_space / (double) indexStat.max_avail * 100.0);
- 		else
- 			snprintf(values[j++], 32, "NaN");
- 		values[j] = palloc(32);
- 		if (indexStat.leaf_pages > 0)
- 			snprintf(values[j++], 32, "%.2f",
- 					 (double) indexStat.fragments / (double) indexStat.leaf_pages * 100.0);
- 		else
- 			snprintf(values[j++], 32, "NaN");
- 
- 		tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
- 									   values);
- 
- 		result = HeapTupleGetDatum(tuple);
- 	}
- 
- 	PG_RETURN_DATUM(result);
- }
- 
- /* --------------------------------------------------------
-  * pg_relpages()
-  *
-  * Get the number of pages of the table/index.
-  *
-  * Usage: SELECT pg_relpages('t1');
-  *		  SELECT pg_relpages('t1_pkey');
-  * --------------------------------------------------------
-  */
- Datum
- pg_relpages(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	int64		relpages;
- 	Relation	rel;
- 	RangeVar   *relrv;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use pgstattuple functions"))));
- 
- 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 	rel = relation_openrv(relrv, AccessShareLock);
- 
- 	/* note: this will work OK on non-local temp tables */
- 
- 	relpages = RelationGetNumberOfBlocks(rel);
- 
- 	relation_close(rel, AccessShareLock);
- 
- 	PG_RETURN_INT64(relpages);
- }
--- 0 ----
diff --git a/contrib/pgstattuple/pgstattuple--1.0.sql b/contrib/pgstattuple/pgstattuple--1.0.sql
index f7e0308..e69de29 100644
*** a/contrib/pgstattuple/pgstattuple--1.0.sql
--- b/contrib/pgstattuple/pgstattuple--1.0.sql
***************
*** 1,49 ****
- /* contrib/pgstattuple/pgstattuple--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pgstattuple" to load this file. \quit
- 
- CREATE FUNCTION pgstattuple(IN relname text,
-     OUT table_len BIGINT,		-- physical table length in bytes
-     OUT tuple_count BIGINT,		-- number of live tuples
-     OUT tuple_len BIGINT,		-- total tuples length in bytes
-     OUT tuple_percent FLOAT8,		-- live tuples in %
-     OUT dead_tuple_count BIGINT,	-- number of dead tuples
-     OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
-     OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
-     OUT free_space BIGINT,		-- free space in bytes
-     OUT free_percent FLOAT8)		-- free space in %
- AS 'MODULE_PATHNAME', 'pgstattuple'
- LANGUAGE C STRICT;
- 
- CREATE FUNCTION pgstattuple(IN reloid oid,
-     OUT table_len BIGINT,		-- physical table length in bytes
-     OUT tuple_count BIGINT,		-- number of live tuples
-     OUT tuple_len BIGINT,		-- total tuples length in bytes
-     OUT tuple_percent FLOAT8,		-- live tuples in %
-     OUT dead_tuple_count BIGINT,	-- number of dead tuples
-     OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
-     OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
-     OUT free_space BIGINT,		-- free space in bytes
-     OUT free_percent FLOAT8)		-- free space in %
- AS 'MODULE_PATHNAME', 'pgstattuplebyid'
- LANGUAGE C STRICT;
- 
- CREATE FUNCTION pgstatindex(IN relname text,
-     OUT version INT,
-     OUT tree_level INT,
-     OUT index_size BIGINT,
-     OUT root_block_no BIGINT,
-     OUT internal_pages BIGINT,
-     OUT leaf_pages BIGINT,
-     OUT empty_pages BIGINT,
-     OUT deleted_pages BIGINT,
-     OUT avg_leaf_density FLOAT8,
-     OUT leaf_fragmentation FLOAT8)
- AS 'MODULE_PATHNAME', 'pgstatindex'
- LANGUAGE C STRICT;
- 
- CREATE FUNCTION pg_relpages(IN relname text)
- RETURNS BIGINT
- AS 'MODULE_PATHNAME', 'pg_relpages'
- LANGUAGE C STRICT;
--- 0 ----
diff --git a/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql b/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql
index 14b63ca..e69de29 100644
*** a/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql
--- b/contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql
***************
*** 1,9 ****
- /* contrib/pgstattuple/pgstattuple--unpackaged--1.0.sql */
- 
- -- complain if script is sourced in psql, rather than via CREATE EXTENSION
- \echo Use "CREATE EXTENSION pgstattuple" to load this file. \quit
- 
- ALTER EXTENSION pgstattuple ADD function pgstattuple(text);
- ALTER EXTENSION pgstattuple ADD function pgstattuple(oid);
- ALTER EXTENSION pgstattuple ADD function pgstatindex(text);
- ALTER EXTENSION pgstattuple ADD function pg_relpages(text);
--- 0 ----
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index e5ddd87..e69de29 100644
*** a/contrib/pgstattuple/pgstattuple.c
--- b/contrib/pgstattuple/pgstattuple.c
***************
*** 1,518 ****
- /*
-  * contrib/pgstattuple/pgstattuple.c
-  *
-  * Copyright (c) 2001,2002	Tatsuo Ishii
-  *
-  * Permission to use, copy, modify, and distribute this software and
-  * its documentation for any purpose, without fee, and without a
-  * written agreement is hereby granted, provided that the above
-  * copyright notice and this paragraph and the following two
-  * paragraphs appear in all copies.
-  *
-  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
-  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
-  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
-  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
-  * OF THE POSSIBILITY OF SUCH DAMAGE.
-  *
-  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
-  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
-  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
-  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
-  */
- 
- #include "postgres.h"
- 
- #include "access/gist_private.h"
- #include "access/hash.h"
- #include "access/nbtree.h"
- #include "access/relscan.h"
- #include "catalog/namespace.h"
- #include "funcapi.h"
- #include "miscadmin.h"
- #include "storage/bufmgr.h"
- #include "storage/lmgr.h"
- #include "utils/builtins.h"
- #include "utils/tqual.h"
- 
- 
- PG_MODULE_MAGIC;
- 
- PG_FUNCTION_INFO_V1(pgstattuple);
- PG_FUNCTION_INFO_V1(pgstattuplebyid);
- 
- extern Datum pgstattuple(PG_FUNCTION_ARGS);
- extern Datum pgstattuplebyid(PG_FUNCTION_ARGS);
- 
- /*
-  * struct pgstattuple_type
-  *
-  * tuple_percent, dead_tuple_percent and free_percent are computable,
-  * so not defined here.
-  */
- typedef struct pgstattuple_type
- {
- 	uint64		table_len;
- 	uint64		tuple_count;
- 	uint64		tuple_len;
- 	uint64		dead_tuple_count;
- 	uint64		dead_tuple_len;
- 	uint64		free_space;		/* free/reusable space in bytes */
- } pgstattuple_type;
- 
- typedef void (*pgstat_page) (pgstattuple_type *, Relation, BlockNumber);
- 
- static Datum build_pgstattuple_type(pgstattuple_type *stat,
- 					   FunctionCallInfo fcinfo);
- static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo);
- static Datum pgstat_heap(Relation rel, FunctionCallInfo fcinfo);
- static void pgstat_btree_page(pgstattuple_type *stat,
- 				  Relation rel, BlockNumber blkno);
- static void pgstat_hash_page(pgstattuple_type *stat,
- 				 Relation rel, BlockNumber blkno);
- static void pgstat_gist_page(pgstattuple_type *stat,
- 				 Relation rel, BlockNumber blkno);
- static Datum pgstat_index(Relation rel, BlockNumber start,
- 			 pgstat_page pagefn, FunctionCallInfo fcinfo);
- static void pgstat_index_page(pgstattuple_type *stat, Page page,
- 				  OffsetNumber minoff, OffsetNumber maxoff);
- 
- /*
-  * build_pgstattuple_type -- build a pgstattuple_type tuple
-  */
- static Datum
- build_pgstattuple_type(pgstattuple_type *stat, FunctionCallInfo fcinfo)
- {
- #define NCOLUMNS	9
- #define NCHARS		32
- 
- 	HeapTuple	tuple;
- 	char	   *values[NCOLUMNS];
- 	char		values_buf[NCOLUMNS][NCHARS];
- 	int			i;
- 	double		tuple_percent;
- 	double		dead_tuple_percent;
- 	double		free_percent;	/* free/reusable space in % */
- 	TupleDesc	tupdesc;
- 	AttInMetadata *attinmeta;
- 
- 	/* Build a tuple descriptor for our result type */
- 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
- 		elog(ERROR, "return type must be a row type");
- 
- 	/*
- 	 * Generate attribute metadata needed later to produce tuples from raw C
- 	 * strings
- 	 */
- 	attinmeta = TupleDescGetAttInMetadata(tupdesc);
- 
- 	if (stat->table_len == 0)
- 	{
- 		tuple_percent = 0.0;
- 		dead_tuple_percent = 0.0;
- 		free_percent = 0.0;
- 	}
- 	else
- 	{
- 		tuple_percent = 100.0 * stat->tuple_len / stat->table_len;
- 		dead_tuple_percent = 100.0 * stat->dead_tuple_len / stat->table_len;
- 		free_percent = 100.0 * stat->free_space / stat->table_len;
- 	}
- 
- 	/*
- 	 * Prepare a values array for constructing the tuple. This should be an
- 	 * array of C strings which will be processed later by the appropriate
- 	 * "in" functions.
- 	 */
- 	for (i = 0; i < NCOLUMNS; i++)
- 		values[i] = values_buf[i];
- 	i = 0;
- 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->table_len);
- 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_count);
- 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_len);
- 	snprintf(values[i++], NCHARS, "%.2f", tuple_percent);
- 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_count);
- 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_len);
- 	snprintf(values[i++], NCHARS, "%.2f", dead_tuple_percent);
- 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->free_space);
- 	snprintf(values[i++], NCHARS, "%.2f", free_percent);
- 
- 	/* build a tuple */
- 	tuple = BuildTupleFromCStrings(attinmeta, values);
- 
- 	/* make the tuple into a datum */
- 	return HeapTupleGetDatum(tuple);
- }
- 
- /* ----------
-  * pgstattuple:
-  * returns live/dead tuples info
-  *
-  * C FUNCTION definition
-  * pgstattuple(text) returns pgstattuple_type
-  * see pgstattuple.sql for pgstattuple_type
-  * ----------
-  */
- 
- Datum
- pgstattuple(PG_FUNCTION_ARGS)
- {
- 	text	   *relname = PG_GETARG_TEXT_P(0);
- 	RangeVar   *relrv;
- 	Relation	rel;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use pgstattuple functions"))));
- 
- 	/* open relation */
- 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
- 	rel = relation_openrv(relrv, AccessShareLock);
- 
- 	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
- }
- 
- Datum
- pgstattuplebyid(PG_FUNCTION_ARGS)
- {
- 	Oid			relid = PG_GETARG_OID(0);
- 	Relation	rel;
- 
- 	if (!superuser())
- 		ereport(ERROR,
- 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- 				 (errmsg("must be superuser to use pgstattuple functions"))));
- 
- 	/* open relation */
- 	rel = relation_open(relid, AccessShareLock);
- 
- 	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
- }
- 
- /*
-  * pgstat_relation
-  */
- static Datum
- pgstat_relation(Relation rel, FunctionCallInfo fcinfo)
- {
- 	const char *err;
- 
- 	/*
- 	 * Reject attempts to read non-local temporary relations; we would be
- 	 * likely to get wrong data since we have no visibility into the owning
- 	 * session's local buffers.
- 	 */
- 	if (RELATION_IS_OTHER_TEMP(rel))
- 		ereport(ERROR,
- 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 				 errmsg("cannot access temporary tables of other sessions")));
- 
- 	switch (rel->rd_rel->relkind)
- 	{
- 		case RELKIND_RELATION:
- 		case RELKIND_TOASTVALUE:
- 		case RELKIND_UNCATALOGED:
- 		case RELKIND_SEQUENCE:
- 			return pgstat_heap(rel, fcinfo);
- 		case RELKIND_INDEX:
- 			switch (rel->rd_rel->relam)
- 			{
- 				case BTREE_AM_OID:
- 					return pgstat_index(rel, BTREE_METAPAGE + 1,
- 										pgstat_btree_page, fcinfo);
- 				case HASH_AM_OID:
- 					return pgstat_index(rel, HASH_METAPAGE + 1,
- 										pgstat_hash_page, fcinfo);
- 				case GIST_AM_OID:
- 					return pgstat_index(rel, GIST_ROOT_BLKNO + 1,
- 										pgstat_gist_page, fcinfo);
- 				case GIN_AM_OID:
- 					err = "gin index";
- 					break;
- 				default:
- 					err = "unknown index";
- 					break;
- 			}
- 			break;
- 		case RELKIND_VIEW:
- 			err = "view";
- 			break;
- 		case RELKIND_COMPOSITE_TYPE:
- 			err = "composite type";
- 			break;
- 		case RELKIND_FOREIGN_TABLE:
- 			err = "foreign table";
- 			break;
- 		default:
- 			err = "unknown";
- 			break;
- 	}
- 
- 	ereport(ERROR,
- 			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
- 			 errmsg("\"%s\" (%s) is not supported",
- 					RelationGetRelationName(rel), err)));
- 	return 0;					/* should not happen */
- }
- 
- /*
-  * pgstat_heap -- returns live/dead tuples info in a heap
-  */
- static Datum
- pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
- {
- 	HeapScanDesc scan;
- 	HeapTuple	tuple;
- 	BlockNumber nblocks;
- 	BlockNumber block = 0;		/* next block to count free space in */
- 	BlockNumber tupblock;
- 	Buffer		buffer;
- 	pgstattuple_type stat = {0};
- 
- 	/* Disable syncscan because we assume we scan from block zero upwards */
- 	scan = heap_beginscan_strat(rel, SnapshotAny, 0, NULL, true, false);
- 
- 	nblocks = scan->rs_nblocks; /* # blocks to be scanned */
- 
- 	/* scan the relation */
- 	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
- 	{
- 		CHECK_FOR_INTERRUPTS();
- 
- 		/* must hold a buffer lock to call HeapTupleSatisfiesVisibility */
- 		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
- 
- 		if (HeapTupleSatisfiesVisibility(tuple, SnapshotNow, scan->rs_cbuf))
- 		{
- 			stat.tuple_len += tuple->t_len;
- 			stat.tuple_count++;
- 		}
- 		else
- 		{
- 			stat.dead_tuple_len += tuple->t_len;
- 			stat.dead_tuple_count++;
- 		}
- 
- 		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
- 
- 		/*
- 		 * To avoid physically reading the table twice, try to do the
- 		 * free-space scan in parallel with the heap scan.	However,
- 		 * heap_getnext may find no tuples on a given page, so we cannot
- 		 * simply examine the pages returned by the heap scan.
- 		 */
- 		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
- 
- 		while (block <= tupblock)
- 		{
- 			CHECK_FOR_INTERRUPTS();
- 
- 			buffer = ReadBuffer(rel, block);
- 			LockBuffer(buffer, BUFFER_LOCK_SHARE);
- 			stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
- 			UnlockReleaseBuffer(buffer);
- 			block++;
- 		}
- 	}
- 	heap_endscan(scan);
- 
- 	while (block < nblocks)
- 	{
- 		CHECK_FOR_INTERRUPTS();
- 
- 		buffer = ReadBuffer(rel, block);
- 		LockBuffer(buffer, BUFFER_LOCK_SHARE);
- 		stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
- 		UnlockReleaseBuffer(buffer);
- 		block++;
- 	}
- 
- 	relation_close(rel, AccessShareLock);
- 
- 	stat.table_len = (uint64) nblocks *BLCKSZ;
- 
- 	return build_pgstattuple_type(&stat, fcinfo);
- }
- 
- /*
-  * pgstat_btree_page -- check tuples in a btree page
-  */
- static void
- pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
- {
- 	Buffer		buf;
- 	Page		page;
- 
- 	buf = ReadBuffer(rel, blkno);
- 	LockBuffer(buf, BT_READ);
- 	page = BufferGetPage(buf);
- 
- 	/* Page is valid, see what to do with it */
- 	if (PageIsNew(page))
- 	{
- 		/* fully empty page */
- 		stat->free_space += BLCKSZ;
- 	}
- 	else
- 	{
- 		BTPageOpaque opaque;
- 
- 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
- 		if (opaque->btpo_flags & (BTP_DELETED | BTP_HALF_DEAD))
- 		{
- 			/* recyclable page */
- 			stat->free_space += BLCKSZ;
- 		}
- 		else if (P_ISLEAF(opaque))
- 		{
- 			pgstat_index_page(stat, page, P_FIRSTDATAKEY(opaque),
- 							  PageGetMaxOffsetNumber(page));
- 		}
- 		else
- 		{
- 			/* root or node */
- 		}
- 	}
- 
- 	_bt_relbuf(rel, buf);
- }
- 
- /*
-  * pgstat_hash_page -- check tuples in a hash page
-  */
- static void
- pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
- {
- 	Buffer		buf;
- 	Page		page;
- 
- 	_hash_getlock(rel, blkno, HASH_SHARE);
- 	buf = _hash_getbuf(rel, blkno, HASH_READ, 0);
- 	page = BufferGetPage(buf);
- 
- 	if (PageGetSpecialSize(page) == MAXALIGN(sizeof(HashPageOpaqueData)))
- 	{
- 		HashPageOpaque opaque;
- 
- 		opaque = (HashPageOpaque) PageGetSpecialPointer(page);
- 		switch (opaque->hasho_flag)
- 		{
- 			case LH_UNUSED_PAGE:
- 				stat->free_space += BLCKSZ;
- 				break;
- 			case LH_BUCKET_PAGE:
- 			case LH_OVERFLOW_PAGE:
- 				pgstat_index_page(stat, page, FirstOffsetNumber,
- 								  PageGetMaxOffsetNumber(page));
- 				break;
- 			case LH_BITMAP_PAGE:
- 			case LH_META_PAGE:
- 			default:
- 				break;
- 		}
- 	}
- 	else
- 	{
- 		/* maybe corrupted */
- 	}
- 
- 	_hash_relbuf(rel, buf);
- 	_hash_droplock(rel, blkno, HASH_SHARE);
- }
- 
- /*
-  * pgstat_gist_page -- check tuples in a gist page
-  */
- static void
- pgstat_gist_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
- {
- 	Buffer		buf;
- 	Page		page;
- 
- 	buf = ReadBuffer(rel, blkno);
- 	LockBuffer(buf, GIST_SHARE);
- 	gistcheckpage(rel, buf);
- 	page = BufferGetPage(buf);
- 
- 	if (GistPageIsLeaf(page))
- 	{
- 		pgstat_index_page(stat, page, FirstOffsetNumber,
- 						  PageGetMaxOffsetNumber(page));
- 	}
- 	else
- 	{
- 		/* root or node */
- 	}
- 
- 	UnlockReleaseBuffer(buf);
- }
- 
- /*
-  * pgstat_index -- returns live/dead tuples info in a generic index
-  */
- static Datum
- pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
- 			 FunctionCallInfo fcinfo)
- {
- 	BlockNumber nblocks;
- 	BlockNumber blkno;
- 	pgstattuple_type stat = {0};
- 
- 	blkno = start;
- 	for (;;)
- 	{
- 		/* Get the current relation length */
- 		LockRelationForExtension(rel, ExclusiveLock);
- 		nblocks = RelationGetNumberOfBlocks(rel);
- 		UnlockRelationForExtension(rel, ExclusiveLock);
- 
- 		/* Quit if we've scanned the whole relation */
- 		if (blkno >= nblocks)
- 		{
- 			stat.table_len = (uint64) nblocks *BLCKSZ;
- 
- 			break;
- 		}
- 
- 		for (; blkno < nblocks; blkno++)
- 		{
- 			CHECK_FOR_INTERRUPTS();
- 
- 			pagefn(&stat, rel, blkno);
- 		}
- 	}
- 
- 	relation_close(rel, AccessShareLock);
- 
- 	return build_pgstattuple_type(&stat, fcinfo);
- }
- 
- /*
-  * pgstat_index_page -- for generic index page
-  */
- static void
- pgstat_index_page(pgstattuple_type *stat, Page page,
- 				  OffsetNumber minoff, OffsetNumber maxoff)
- {
- 	OffsetNumber i;
- 
- 	stat->free_space += PageGetFreeSpace(page);
- 
- 	for (i = minoff; i <= maxoff; i = OffsetNumberNext(i))
- 	{
- 		ItemId		itemid = PageGetItemId(page, i);
- 
- 		if (ItemIdIsDead(itemid))
- 		{
- 			stat->dead_tuple_count++;
- 			stat->dead_tuple_len += ItemIdGetLength(itemid);
- 		}
- 		else
- 		{
- 			stat->tuple_count++;
- 			stat->tuple_len += ItemIdGetLength(itemid);
- 		}
- 	}
- }
--- 0 ----
diff --git a/contrib/pgstattuple/pgstattuple.control b/contrib/pgstattuple/pgstattuple.control
index 7b5129b..e69de29 100644
*** a/contrib/pgstattuple/pgstattuple.control
--- b/contrib/pgstattuple/pgstattuple.control
***************
*** 1,5 ****
- # pgstattuple extension
- comment = 'show tuple-level statistics'
- default_version = '1.0'
- module_pathname = '$libdir/pgstattuple'
- relocatable = true
--- 0 ----
diff --git a/contrib/pgstattuple/sql/pgstattuple.sql b/contrib/pgstattuple/sql/pgstattuple.sql
index 2fd1152..e69de29 100644
*** a/contrib/pgstattuple/sql/pgstattuple.sql
--- b/contrib/pgstattuple/sql/pgstattuple.sql
***************
*** 1,17 ****
- CREATE EXTENSION pgstattuple;
- 
- --
- -- It's difficult to come up with platform-independent test cases for
- -- the pgstattuple functions, but the results for empty tables and
- -- indexes should be that.
- --
- 
- create table test (a int primary key);
- 
- select * from pgstattuple('test'::text);
- select * from pgstattuple('test'::regclass);
- 
- select * from pgstatindex('test_pkey');
- 
- select pg_relpages('test');
- select pg_relpages('test_pkey');
--- 0 ----
diff --git a/doc/src/sgml/contrib.sgml b/doc/src/sgml/contrib.sgml
index adf09ca..0d16084 100644
*** a/doc/src/sgml/contrib.sgml
--- b/doc/src/sgml/contrib.sgml
*************** CREATE EXTENSION <replaceable>module_nam
*** 89,95 ****
  
   &adminpack;
   &auth-delay;
-  &auto-explain;
   &btree-gin;
   &btree-gist;
   &chkpass;
--- 89,94 ----
*************** CREATE EXTENSION <replaceable>module_nam
*** 109,125 ****
   &lo;
   &ltree;
   &oid2name;
-  &pageinspect;
   &passwordcheck;
   &pgarchivecleanup;
   &pgbench;
-  &pgbuffercache;
   &pgcrypto;
-  &pgfreespacemap;
-  &pgrowlocks;
   &pgstandby;
-  &pgstatstatements;
-  &pgstattuple;
   &pgtestfsync;
   &pgtrgm;
   &pgupgrade;
--- 108,118 ----
diff --git a/doc/src/sgml/extensions.sgml b/doc/src/sgml/extensions.sgml
index ...eea69c3 .
*** a/doc/src/sgml/extensions.sgml
--- b/doc/src/sgml/extensions.sgml
***************
*** 0 ****
--- 1,77 ----
+ <!-- doc/src/sgml/extensions.sgml -->
+ 
+ <appendix id="extensions">
+  <title>Core Extensions</title>
+ 
+  <para>
+  
+   It is difficult to manage all of the components to
+   <productname>PostgreSQL</productname> without making the database core
+   larger than it must be.  But many enhancements can be efficiently developed
+   using the facilites normally intended for adding external modules. 
+   This appendix contains information regarding core extensions that are
+   built and included with a standard installation of PostgreSQL.  These
+   core extensions supply useful features in areas such as database diagnostics
+   and performance monitoring.
+  </para>
+ 
+  <para>  
+   Some of these features could instead be made available as built-in functions.
+   Providing them as extension modules instead reduces the amount of code to be
+   maintained in the main database.  It also serves as an example of how
+   powerful the extension features described in <xref linkend="extend"> are.  It
+   is possible to write your own extensions of similar utility to those listed
+   here, and to use these as examples for doing so.
+  </para>
+ 
+  <para>
+   To make use of one of these extensions, you need to register the new SQL
+   objects in the database system.  This is done by executing a
+   <xref linkend="sql-createextension"> command.  In a fresh database,
+   you can simply do
+ 
+ <programlisting>
+ CREATE EXTENSION <replaceable>module_name</>;
+ </programlisting>
+ 
+   This command must be run by a database superuser.  This registers the
+   new SQL objects in the current database only, so you need to run this
+   command in each database that you want
+   the module's facilities to be available in.  Alternatively, run it in
+   database <literal>template1</> so that the extension will be copied into
+   subsequently-created databases by default.
+  </para>
+ 
+  <para>
+   Many modules allow you to install their objects in a schema of your
+   choice.  To do that, add <literal>SCHEMA
+   <replaceable>schema_name</></literal> to the <command>CREATE EXTENSION</>
+   command.  By default, the objects will be placed in your current creation
+   target schema, typically <literal>public</>.
+  </para>
+ 
+  <para>
+   If your database was brought forward by dump and reload from a pre-9.1
+   version of <productname>PostgreSQL</>, and you had been using the pre-9.1
+   version of the module in it, you should instead do
+ 
+ <programlisting>
+ CREATE EXTENSION <replaceable>module_name</> FROM unpackaged;
+ </programlisting>
+ 
+   This will update the pre-9.1 objects of the module into a proper
+   <firstterm>extension</> object.  Future updates to the module will be
+   managed by <xref linkend="sql-alterextension">.
+   For more information about extension updates, see
+   <xref linkend="extend-extensions">.
+  </para>
+ 
+  &auto-explain;
+  &pageinspect;
+  &pgbuffercache;
+  &pgfreespacemap;
+  &pgrowlocks;
+  &pgstatstatements;
+  &pgstattuple;
+ 
+ </appendix>
diff --git a/doc/src/sgml/external-projects.sgml b/doc/src/sgml/external-projects.sgml
index ef516b4..8b12574 100644
*** a/doc/src/sgml/external-projects.sgml
--- b/doc/src/sgml/external-projects.sgml
***************
*** 246,254 ****
    <para>
     <productname>PostgreSQL</> is designed to be easily extensible. For
     this reason, extensions loaded into the database can function
!    just like features that are built in. The
     <filename>contrib/</> directory shipped with the source code
!    contains several extensions, which are described in
     <xref linkend="contrib">.  Other extensions are developed
     independently, like <application><ulink
     url="http://www.postgis.org/">PostGIS</ulink></>.  Even
--- 246,255 ----
    <para>
     <productname>PostgreSQL</> is designed to be easily extensible. For
     this reason, extensions loaded into the database can function
!    just like features that are built in.  Some <xref linkend="extension">
!    are available in any installation.  The
     <filename>contrib/</> directory shipped with the source code
!    contains several optional extensions, which are described in
     <xref linkend="contrib">.  Other extensions are developed
     independently, like <application><ulink
     url="http://www.postgis.org/">PostGIS</ulink></>.  Even
diff --git a/doc/src/sgml/filelist.sgml b/doc/src/sgml/filelist.sgml
index fb69415..518d6ca 100644
*** a/doc/src/sgml/filelist.sgml
--- b/doc/src/sgml/filelist.sgml
***************
*** 92,102 ****
  <!ENTITY sources    SYSTEM "sources.sgml">
  <!ENTITY storage    SYSTEM "storage.sgml">
  
  <!-- contrib information -->
  <!ENTITY contrib         SYSTEM "contrib.sgml">
  <!ENTITY adminpack       SYSTEM "adminpack.sgml">
  <!ENTITY auth-delay      SYSTEM "auth-delay.sgml">
- <!ENTITY auto-explain    SYSTEM "auto-explain.sgml">
  <!ENTITY btree-gin       SYSTEM "btree-gin.sgml">
  <!ENTITY btree-gist      SYSTEM "btree-gist.sgml">
  <!ENTITY chkpass         SYSTEM "chkpass.sgml">
--- 92,111 ----
  <!ENTITY sources    SYSTEM "sources.sgml">
  <!ENTITY storage    SYSTEM "storage.sgml">
  
+ <!-- core extensions -->
+ <!ENTITY extensions      SYSTEM "extensions.sgml">
+ <!ENTITY auto-explain    SYSTEM "auto-explain.sgml">
+ <!ENTITY pageinspect     SYSTEM "pageinspect.sgml">
+ <!ENTITY pgbuffercache   SYSTEM "pgbuffercache.sgml">
+ <!ENTITY pgfreespacemap  SYSTEM "pgfreespacemap.sgml">
+ <!ENTITY pgrowlocks      SYSTEM "pgrowlocks.sgml">
+ <!ENTITY pgstatstatements SYSTEM "pgstatstatements.sgml">
+ <!ENTITY pgstattuple     SYSTEM "pgstattuple.sgml">
+ 
  <!-- contrib information -->
  <!ENTITY contrib         SYSTEM "contrib.sgml">
  <!ENTITY adminpack       SYSTEM "adminpack.sgml">
  <!ENTITY auth-delay      SYSTEM "auth-delay.sgml">
  <!ENTITY btree-gin       SYSTEM "btree-gin.sgml">
  <!ENTITY btree-gist      SYSTEM "btree-gist.sgml">
  <!ENTITY chkpass         SYSTEM "chkpass.sgml">
***************
*** 116,132 ****
  <!ENTITY lo              SYSTEM "lo.sgml">
  <!ENTITY ltree           SYSTEM "ltree.sgml">
  <!ENTITY oid2name        SYSTEM "oid2name.sgml">
- <!ENTITY pageinspect     SYSTEM "pageinspect.sgml">
  <!ENTITY passwordcheck   SYSTEM "passwordcheck.sgml">
  <!ENTITY pgbench         SYSTEM "pgbench.sgml">
  <!ENTITY pgarchivecleanup SYSTEM "pgarchivecleanup.sgml">
- <!ENTITY pgbuffercache   SYSTEM "pgbuffercache.sgml">
  <!ENTITY pgcrypto        SYSTEM "pgcrypto.sgml">
- <!ENTITY pgfreespacemap  SYSTEM "pgfreespacemap.sgml">
- <!ENTITY pgrowlocks      SYSTEM "pgrowlocks.sgml">
  <!ENTITY pgstandby       SYSTEM "pgstandby.sgml">
- <!ENTITY pgstatstatements SYSTEM "pgstatstatements.sgml">
- <!ENTITY pgstattuple     SYSTEM "pgstattuple.sgml">
  <!ENTITY pgtestfsync     SYSTEM "pgtestfsync.sgml">
  <!ENTITY pgtrgm          SYSTEM "pgtrgm.sgml">
  <!ENTITY pgupgrade       SYSTEM "pgupgrade.sgml">
--- 125,135 ----
diff --git a/doc/src/sgml/postgres.sgml b/doc/src/sgml/postgres.sgml
index ac1da22..1a90267 100644
*** a/doc/src/sgml/postgres.sgml
--- b/doc/src/sgml/postgres.sgml
***************
*** 257,262 ****
--- 257,263 ----
    &keywords;
    &features;
    &release;
+   &extensions;
    &contrib;
    &external-projects;
    &sourcerepo;
diff --git a/src/Makefile b/src/Makefile
index a046034..87d6e2c 100644
*** a/src/Makefile
--- b/src/Makefile
*************** SUBDIRS = \
*** 24,29 ****
--- 24,30 ----
  	bin \
  	pl \
  	makefiles \
+ 	extension \
  	test/regress
  
  # There are too many interdependencies between the subdirectories, so
diff --git a/src/extension/Makefile b/src/extension/Makefile
index ...4e02cfc .
*** a/src/extension/Makefile
--- b/src/extension/Makefile
***************
*** 0 ****
--- 1,17 ----
+ # src/extension/Makefile
+ 
+ subdir = src/extension
+ top_builddir = ../..
+ include $(top_builddir)/src/Makefile.global
+ 
+ SUBDIRS = \
+ 		auto_explain	\
+ 		pageinspect	\
+ 		pg_buffercache \
+ 		pg_freespacemap \
+ 		pg_stat_statements \
+ 		pgrowlocks	\
+ 		pgstattuple
+ 
+ $(recurse)
+ $(recurse_always)
diff --git a/src/extension/auto_explain/Makefile b/src/extension/auto_explain/Makefile
index ...2fb420d .
*** a/src/extension/auto_explain/Makefile
--- b/src/extension/auto_explain/Makefile
***************
*** 0 ****
--- 1,15 ----
+ # src/extension/auto_explain/Makefile
+ 
+ MODULE_big = auto_explain
+ OBJS = auto_explain.o
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/extension/auto_explain
+ top_builddir = ../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/src/extension/extension-global.mk
+ endif
diff --git a/src/extension/auto_explain/auto_explain.c b/src/extension/auto_explain/auto_explain.c
index ...647f6d0 .
*** a/src/extension/auto_explain/auto_explain.c
--- b/src/extension/auto_explain/auto_explain.c
***************
*** 0 ****
--- 1,304 ----
+ /*-------------------------------------------------------------------------
+  *
+  * auto_explain.c
+  *
+  *
+  * Copyright (c) 2008-2011, PostgreSQL Global Development Group
+  *
+  * IDENTIFICATION
+  *	  src/extension/auto_explain/auto_explain.c
+  *
+  *-------------------------------------------------------------------------
+  */
+ #include "postgres.h"
+ 
+ #include "commands/explain.h"
+ #include "executor/instrument.h"
+ #include "utils/guc.h"
+ 
+ PG_MODULE_MAGIC;
+ 
+ /* GUC variables */
+ static int	auto_explain_log_min_duration = -1; /* msec or -1 */
+ static bool auto_explain_log_analyze = false;
+ static bool auto_explain_log_verbose = false;
+ static bool auto_explain_log_buffers = false;
+ static int	auto_explain_log_format = EXPLAIN_FORMAT_TEXT;
+ static bool auto_explain_log_nested_statements = false;
+ 
+ static const struct config_enum_entry format_options[] = {
+ 	{"text", EXPLAIN_FORMAT_TEXT, false},
+ 	{"xml", EXPLAIN_FORMAT_XML, false},
+ 	{"json", EXPLAIN_FORMAT_JSON, false},
+ 	{"yaml", EXPLAIN_FORMAT_YAML, false},
+ 	{NULL, 0, false}
+ };
+ 
+ /* Current nesting depth of ExecutorRun calls */
+ static int	nesting_level = 0;
+ 
+ /* Saved hook values in case of unload */
+ static ExecutorStart_hook_type prev_ExecutorStart = NULL;
+ static ExecutorRun_hook_type prev_ExecutorRun = NULL;
+ static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
+ static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
+ 
+ #define auto_explain_enabled() \
+ 	(auto_explain_log_min_duration >= 0 && \
+ 	 (nesting_level == 0 || auto_explain_log_nested_statements))
+ 
+ void		_PG_init(void);
+ void		_PG_fini(void);
+ 
+ static void explain_ExecutorStart(QueryDesc *queryDesc, int eflags);
+ static void explain_ExecutorRun(QueryDesc *queryDesc,
+ 					ScanDirection direction,
+ 					long count);
+ static void explain_ExecutorFinish(QueryDesc *queryDesc);
+ static void explain_ExecutorEnd(QueryDesc *queryDesc);
+ 
+ 
+ /*
+  * Module load callback
+  */
+ void
+ _PG_init(void)
+ {
+ 	/* Define custom GUC variables. */
+ 	DefineCustomIntVariable("auto_explain.log_min_duration",
+ 		 "Sets the minimum execution time above which plans will be logged.",
+ 						 "Zero prints all plans. -1 turns this feature off.",
+ 							&auto_explain_log_min_duration,
+ 							-1,
+ 							-1, INT_MAX / 1000,
+ 							PGC_SUSET,
+ 							GUC_UNIT_MS,
+ 							NULL,
+ 							NULL,
+ 							NULL);
+ 
+ 	DefineCustomBoolVariable("auto_explain.log_analyze",
+ 							 "Use EXPLAIN ANALYZE for plan logging.",
+ 							 NULL,
+ 							 &auto_explain_log_analyze,
+ 							 false,
+ 							 PGC_SUSET,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	DefineCustomBoolVariable("auto_explain.log_verbose",
+ 							 "Use EXPLAIN VERBOSE for plan logging.",
+ 							 NULL,
+ 							 &auto_explain_log_verbose,
+ 							 false,
+ 							 PGC_SUSET,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	DefineCustomBoolVariable("auto_explain.log_buffers",
+ 							 "Log buffers usage.",
+ 							 NULL,
+ 							 &auto_explain_log_buffers,
+ 							 false,
+ 							 PGC_SUSET,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	DefineCustomEnumVariable("auto_explain.log_format",
+ 							 "EXPLAIN format to be used for plan logging.",
+ 							 NULL,
+ 							 &auto_explain_log_format,
+ 							 EXPLAIN_FORMAT_TEXT,
+ 							 format_options,
+ 							 PGC_SUSET,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	DefineCustomBoolVariable("auto_explain.log_nested_statements",
+ 							 "Log nested statements.",
+ 							 NULL,
+ 							 &auto_explain_log_nested_statements,
+ 							 false,
+ 							 PGC_SUSET,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	EmitWarningsOnPlaceholders("auto_explain");
+ 
+ 	/* Install hooks. */
+ 	prev_ExecutorStart = ExecutorStart_hook;
+ 	ExecutorStart_hook = explain_ExecutorStart;
+ 	prev_ExecutorRun = ExecutorRun_hook;
+ 	ExecutorRun_hook = explain_ExecutorRun;
+ 	prev_ExecutorFinish = ExecutorFinish_hook;
+ 	ExecutorFinish_hook = explain_ExecutorFinish;
+ 	prev_ExecutorEnd = ExecutorEnd_hook;
+ 	ExecutorEnd_hook = explain_ExecutorEnd;
+ }
+ 
+ /*
+  * Module unload callback
+  */
+ void
+ _PG_fini(void)
+ {
+ 	/* Uninstall hooks. */
+ 	ExecutorStart_hook = prev_ExecutorStart;
+ 	ExecutorRun_hook = prev_ExecutorRun;
+ 	ExecutorFinish_hook = prev_ExecutorFinish;
+ 	ExecutorEnd_hook = prev_ExecutorEnd;
+ }
+ 
+ /*
+  * ExecutorStart hook: start up logging if needed
+  */
+ static void
+ explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
+ {
+ 	if (auto_explain_enabled())
+ 	{
+ 		/* Enable per-node instrumentation iff log_analyze is required. */
+ 		if (auto_explain_log_analyze && (eflags & EXEC_FLAG_EXPLAIN_ONLY) == 0)
+ 		{
+ 			queryDesc->instrument_options |= INSTRUMENT_TIMER;
+ 			if (auto_explain_log_buffers)
+ 				queryDesc->instrument_options |= INSTRUMENT_BUFFERS;
+ 		}
+ 	}
+ 
+ 	if (prev_ExecutorStart)
+ 		prev_ExecutorStart(queryDesc, eflags);
+ 	else
+ 		standard_ExecutorStart(queryDesc, eflags);
+ 
+ 	if (auto_explain_enabled())
+ 	{
+ 		/*
+ 		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
+ 		 * space is allocated in the per-query context so it will go away at
+ 		 * ExecutorEnd.
+ 		 */
+ 		if (queryDesc->totaltime == NULL)
+ 		{
+ 			MemoryContext oldcxt;
+ 
+ 			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
+ 			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
+ 			MemoryContextSwitchTo(oldcxt);
+ 		}
+ 	}
+ }
+ 
+ /*
+  * ExecutorRun hook: all we need do is track nesting depth
+  */
+ static void
+ explain_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
+ {
+ 	nesting_level++;
+ 	PG_TRY();
+ 	{
+ 		if (prev_ExecutorRun)
+ 			prev_ExecutorRun(queryDesc, direction, count);
+ 		else
+ 			standard_ExecutorRun(queryDesc, direction, count);
+ 		nesting_level--;
+ 	}
+ 	PG_CATCH();
+ 	{
+ 		nesting_level--;
+ 		PG_RE_THROW();
+ 	}
+ 	PG_END_TRY();
+ }
+ 
+ /*
+  * ExecutorFinish hook: all we need do is track nesting depth
+  */
+ static void
+ explain_ExecutorFinish(QueryDesc *queryDesc)
+ {
+ 	nesting_level++;
+ 	PG_TRY();
+ 	{
+ 		if (prev_ExecutorFinish)
+ 			prev_ExecutorFinish(queryDesc);
+ 		else
+ 			standard_ExecutorFinish(queryDesc);
+ 		nesting_level--;
+ 	}
+ 	PG_CATCH();
+ 	{
+ 		nesting_level--;
+ 		PG_RE_THROW();
+ 	}
+ 	PG_END_TRY();
+ }
+ 
+ /*
+  * ExecutorEnd hook: log results if needed
+  */
+ static void
+ explain_ExecutorEnd(QueryDesc *queryDesc)
+ {
+ 	if (queryDesc->totaltime && auto_explain_enabled())
+ 	{
+ 		double		msec;
+ 
+ 		/*
+ 		 * Make sure stats accumulation is done.  (Note: it's okay if several
+ 		 * levels of hook all do this.)
+ 		 */
+ 		InstrEndLoop(queryDesc->totaltime);
+ 
+ 		/* Log plan if duration is exceeded. */
+ 		msec = queryDesc->totaltime->total * 1000.0;
+ 		if (msec >= auto_explain_log_min_duration)
+ 		{
+ 			ExplainState es;
+ 
+ 			ExplainInitState(&es);
+ 			es.analyze = (queryDesc->instrument_options && auto_explain_log_analyze);
+ 			es.verbose = auto_explain_log_verbose;
+ 			es.buffers = (es.analyze && auto_explain_log_buffers);
+ 			es.format = auto_explain_log_format;
+ 
+ 			ExplainBeginOutput(&es);
+ 			ExplainQueryText(&es, queryDesc);
+ 			ExplainPrintPlan(&es, queryDesc);
+ 			ExplainEndOutput(&es);
+ 
+ 			/* Remove last line break */
+ 			if (es.str->len > 0 && es.str->data[es.str->len - 1] == '\n')
+ 				es.str->data[--es.str->len] = '\0';
+ 
+ 			/*
+ 			 * Note: we rely on the existing logging of context or
+ 			 * debug_query_string to identify just which statement is being
+ 			 * reported.  This isn't ideal but trying to do it here would
+ 			 * often result in duplication.
+ 			 */
+ 			ereport(LOG,
+ 					(errmsg("duration: %.3f ms  plan:\n%s",
+ 							msec, es.str->data),
+ 					 errhidestmt(true)));
+ 
+ 			pfree(es.str->data);
+ 		}
+ 	}
+ 
+ 	if (prev_ExecutorEnd)
+ 		prev_ExecutorEnd(queryDesc);
+ 	else
+ 		standard_ExecutorEnd(queryDesc);
+ }
diff --git a/src/extension/extension-global.mk b/src/extension/extension-global.mk
index ...bb23e2d .
*** a/src/extension/extension-global.mk
--- b/src/extension/extension-global.mk
***************
*** 0 ****
--- 1,4 ----
+ # src/extension/extension-global.mk
+ 
+ NO_PGXS = 1
+ include $(top_srcdir)/src/makefiles/pgxs.mk
diff --git a/src/extension/pageinspect/Makefile b/src/extension/pageinspect/Makefile
index ...1fb9974 .
*** a/src/extension/pageinspect/Makefile
--- b/src/extension/pageinspect/Makefile
***************
*** 0 ****
--- 1,18 ----
+ # src/extension/pageinspect/Makefile
+ 
+ MODULE_big	= pageinspect
+ OBJS		= rawpage.o heapfuncs.o btreefuncs.o fsmfuncs.o
+ 
+ EXTENSION = pageinspect
+ DATA = pageinspect--1.0.sql pageinspect--unpackaged--1.0.sql
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/extension/pageinspect
+ top_builddir = ../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/src/extension/extension-global.mk
+ endif
diff --git a/src/extension/pageinspect/btreefuncs.c b/src/extension/pageinspect/btreefuncs.c
index ...30bfd27 .
*** a/src/extension/pageinspect/btreefuncs.c
--- b/src/extension/pageinspect/btreefuncs.c
***************
*** 0 ****
--- 1,500 ----
+ /*
+  * src/extension/pageinspect/btreefuncs.c
+  *
+  *
+  * btreefuncs.c
+  *
+  * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
+  *
+  * Permission to use, copy, modify, and distribute this software and
+  * its documentation for any purpose, without fee, and without a
+  * written agreement is hereby granted, provided that the above
+  * copyright notice and this paragraph and the following two
+  * paragraphs appear in all copies.
+  *
+  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+  * OF THE POSSIBILITY OF SUCH DAMAGE.
+  *
+  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "access/nbtree.h"
+ #include "catalog/namespace.h"
+ #include "funcapi.h"
+ #include "miscadmin.h"
+ #include "utils/builtins.h"
+ #include "utils/rel.h"
+ 
+ 
+ extern Datum bt_metap(PG_FUNCTION_ARGS);
+ extern Datum bt_page_items(PG_FUNCTION_ARGS);
+ extern Datum bt_page_stats(PG_FUNCTION_ARGS);
+ 
+ PG_FUNCTION_INFO_V1(bt_metap);
+ PG_FUNCTION_INFO_V1(bt_page_items);
+ PG_FUNCTION_INFO_V1(bt_page_stats);
+ 
+ #define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
+ #define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
+ 
+ #define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
+ 		if ( !(FirstOffsetNumber <= (offnum) && \
+ 						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
+ 			 elog(ERROR, "page offset number out of range"); }
+ 
+ /* note: BlockNumber is unsigned, hence can't be negative */
+ #define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
+ 		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
+ 			 elog(ERROR, "block number out of range"); }
+ 
+ /* ------------------------------------------------
+  * structure for single btree page statistics
+  * ------------------------------------------------
+  */
+ typedef struct BTPageStat
+ {
+ 	uint32		blkno;
+ 	uint32		live_items;
+ 	uint32		dead_items;
+ 	uint32		page_size;
+ 	uint32		max_avail;
+ 	uint32		free_size;
+ 	uint32		avg_item_size;
+ 	char		type;
+ 
+ 	/* opaque data */
+ 	BlockNumber btpo_prev;
+ 	BlockNumber btpo_next;
+ 	union
+ 	{
+ 		uint32		level;
+ 		TransactionId xact;
+ 	}			btpo;
+ 	uint16		btpo_flags;
+ 	BTCycleId	btpo_cycleid;
+ } BTPageStat;
+ 
+ 
+ /* -------------------------------------------------
+  * GetBTPageStatistics()
+  *
+  * Collect statistics of single b-tree page
+  * -------------------------------------------------
+  */
+ static void
+ GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
+ {
+ 	Page		page = BufferGetPage(buffer);
+ 	PageHeader	phdr = (PageHeader) page;
+ 	OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
+ 	BTPageOpaque opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+ 	int			item_size = 0;
+ 	int			off;
+ 
+ 	stat->blkno = blkno;
+ 
+ 	stat->max_avail = BLCKSZ - (BLCKSZ - phdr->pd_special + SizeOfPageHeaderData);
+ 
+ 	stat->dead_items = stat->live_items = 0;
+ 
+ 	stat->page_size = PageGetPageSize(page);
+ 
+ 	/* page type (flags) */
+ 	if (P_ISDELETED(opaque))
+ 	{
+ 		stat->type = 'd';
+ 		stat->btpo.xact = opaque->btpo.xact;
+ 		return;
+ 	}
+ 	else if (P_IGNORE(opaque))
+ 		stat->type = 'e';
+ 	else if (P_ISLEAF(opaque))
+ 		stat->type = 'l';
+ 	else if (P_ISROOT(opaque))
+ 		stat->type = 'r';
+ 	else
+ 		stat->type = 'i';
+ 
+ 	/* btpage opaque data */
+ 	stat->btpo_prev = opaque->btpo_prev;
+ 	stat->btpo_next = opaque->btpo_next;
+ 	stat->btpo.level = opaque->btpo.level;
+ 	stat->btpo_flags = opaque->btpo_flags;
+ 	stat->btpo_cycleid = opaque->btpo_cycleid;
+ 
+ 	/* count live and dead tuples, and free space */
+ 	for (off = FirstOffsetNumber; off <= maxoff; off++)
+ 	{
+ 		IndexTuple	itup;
+ 
+ 		ItemId		id = PageGetItemId(page, off);
+ 
+ 		itup = (IndexTuple) PageGetItem(page, id);
+ 
+ 		item_size += IndexTupleSize(itup);
+ 
+ 		if (!ItemIdIsDead(id))
+ 			stat->live_items++;
+ 		else
+ 			stat->dead_items++;
+ 	}
+ 	stat->free_size = PageGetFreeSpace(page);
+ 
+ 	if ((stat->live_items + stat->dead_items) > 0)
+ 		stat->avg_item_size = item_size / (stat->live_items + stat->dead_items);
+ 	else
+ 		stat->avg_item_size = 0;
+ }
+ 
+ /* -----------------------------------------------
+  * bt_page()
+  *
+  * Usage: SELECT * FROM bt_page('t1_pkey', 1);
+  * -----------------------------------------------
+  */
+ Datum
+ bt_page_stats(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	uint32		blkno = PG_GETARG_UINT32(1);
+ 	Buffer		buffer;
+ 	Relation	rel;
+ 	RangeVar   *relrv;
+ 	Datum		result;
+ 	HeapTuple	tuple;
+ 	TupleDesc	tupleDesc;
+ 	int			j;
+ 	char	   *values[11];
+ 	BTPageStat	stat;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use pageinspect functions"))));
+ 
+ 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 	rel = relation_openrv(relrv, AccessShareLock);
+ 
+ 	if (!IS_INDEX(rel) || !IS_BTREE(rel))
+ 		elog(ERROR, "relation \"%s\" is not a btree index",
+ 			 RelationGetRelationName(rel));
+ 
+ 	/*
+ 	 * Reject attempts to read non-local temporary relations; we would be
+ 	 * likely to get wrong data since we have no visibility into the owning
+ 	 * session's local buffers.
+ 	 */
+ 	if (RELATION_IS_OTHER_TEMP(rel))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("cannot access temporary tables of other sessions")));
+ 
+ 	if (blkno == 0)
+ 		elog(ERROR, "block 0 is a meta page");
+ 
+ 	CHECK_RELATION_BLOCK_RANGE(rel, blkno);
+ 
+ 	buffer = ReadBuffer(rel, blkno);
+ 
+ 	/* keep compiler quiet */
+ 	stat.btpo_prev = stat.btpo_next = InvalidBlockNumber;
+ 	stat.btpo_flags = stat.free_size = stat.avg_item_size = 0;
+ 
+ 	GetBTPageStatistics(blkno, buffer, &stat);
+ 
+ 	/* Build a tuple descriptor for our result type */
+ 	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+ 		elog(ERROR, "return type must be a row type");
+ 
+ 	j = 0;
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.blkno);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%c", stat.type);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.live_items);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.dead_items);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.avg_item_size);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.page_size);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.free_size);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.btpo_prev);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.btpo_next);
+ 	values[j] = palloc(32);
+ 	if (stat.type == 'd')
+ 		snprintf(values[j++], 32, "%d", stat.btpo.xact);
+ 	else
+ 		snprintf(values[j++], 32, "%d", stat.btpo.level);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", stat.btpo_flags);
+ 
+ 	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+ 								   values);
+ 
+ 	result = HeapTupleGetDatum(tuple);
+ 
+ 	ReleaseBuffer(buffer);
+ 
+ 	relation_close(rel, AccessShareLock);
+ 
+ 	PG_RETURN_DATUM(result);
+ }
+ 
+ /*-------------------------------------------------------
+  * bt_page_items()
+  *
+  * Get IndexTupleData set in a btree page
+  *
+  * Usage: SELECT * FROM bt_page_items('t1_pkey', 1);
+  *-------------------------------------------------------
+  */
+ 
+ /*
+  * cross-call data structure for SRF
+  */
+ struct user_args
+ {
+ 	Page		page;
+ 	OffsetNumber offset;
+ };
+ 
+ Datum
+ bt_page_items(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	uint32		blkno = PG_GETARG_UINT32(1);
+ 	Datum		result;
+ 	char	   *values[6];
+ 	HeapTuple	tuple;
+ 	FuncCallContext *fctx;
+ 	MemoryContext mctx;
+ 	struct user_args *uargs;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use pageinspect functions"))));
+ 
+ 	if (SRF_IS_FIRSTCALL())
+ 	{
+ 		RangeVar   *relrv;
+ 		Relation	rel;
+ 		Buffer		buffer;
+ 		BTPageOpaque opaque;
+ 		TupleDesc	tupleDesc;
+ 
+ 		fctx = SRF_FIRSTCALL_INIT();
+ 
+ 		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 		rel = relation_openrv(relrv, AccessShareLock);
+ 
+ 		if (!IS_INDEX(rel) || !IS_BTREE(rel))
+ 			elog(ERROR, "relation \"%s\" is not a btree index",
+ 				 RelationGetRelationName(rel));
+ 
+ 		/*
+ 		 * Reject attempts to read non-local temporary relations; we would be
+ 		 * likely to get wrong data since we have no visibility into the
+ 		 * owning session's local buffers.
+ 		 */
+ 		if (RELATION_IS_OTHER_TEMP(rel))
+ 			ereport(ERROR,
+ 					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				errmsg("cannot access temporary tables of other sessions")));
+ 
+ 		if (blkno == 0)
+ 			elog(ERROR, "block 0 is a meta page");
+ 
+ 		CHECK_RELATION_BLOCK_RANGE(rel, blkno);
+ 
+ 		buffer = ReadBuffer(rel, blkno);
+ 
+ 		/*
+ 		 * We copy the page into local storage to avoid holding pin on the
+ 		 * buffer longer than we must, and possibly failing to release it at
+ 		 * all if the calling query doesn't fetch all rows.
+ 		 */
+ 		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
+ 
+ 		uargs = palloc(sizeof(struct user_args));
+ 
+ 		uargs->page = palloc(BLCKSZ);
+ 		memcpy(uargs->page, BufferGetPage(buffer), BLCKSZ);
+ 
+ 		ReleaseBuffer(buffer);
+ 		relation_close(rel, AccessShareLock);
+ 
+ 		uargs->offset = FirstOffsetNumber;
+ 
+ 		opaque = (BTPageOpaque) PageGetSpecialPointer(uargs->page);
+ 
+ 		if (P_ISDELETED(opaque))
+ 			elog(NOTICE, "page is deleted");
+ 
+ 		fctx->max_calls = PageGetMaxOffsetNumber(uargs->page);
+ 
+ 		/* Build a tuple descriptor for our result type */
+ 		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+ 			elog(ERROR, "return type must be a row type");
+ 
+ 		fctx->attinmeta = TupleDescGetAttInMetadata(tupleDesc);
+ 
+ 		fctx->user_fctx = uargs;
+ 
+ 		MemoryContextSwitchTo(mctx);
+ 	}
+ 
+ 	fctx = SRF_PERCALL_SETUP();
+ 	uargs = fctx->user_fctx;
+ 
+ 	if (fctx->call_cntr < fctx->max_calls)
+ 	{
+ 		ItemId		id;
+ 		IndexTuple	itup;
+ 		int			j;
+ 		int			off;
+ 		int			dlen;
+ 		char	   *dump;
+ 		char	   *ptr;
+ 
+ 		id = PageGetItemId(uargs->page, uargs->offset);
+ 
+ 		if (!ItemIdIsValid(id))
+ 			elog(ERROR, "invalid ItemId");
+ 
+ 		itup = (IndexTuple) PageGetItem(uargs->page, id);
+ 
+ 		j = 0;
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "%d", uargs->offset);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "(%u,%u)",
+ 				 BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
+ 				 itup->t_tid.ip_posid);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "%d", (int) IndexTupleSize(itup));
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "%c", IndexTupleHasNulls(itup) ? 't' : 'f');
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
+ 
+ 		ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
+ 		dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
+ 		dump = palloc0(dlen * 3 + 1);
+ 		values[j] = dump;
+ 		for (off = 0; off < dlen; off++)
+ 		{
+ 			if (off > 0)
+ 				*dump++ = ' ';
+ 			sprintf(dump, "%02x", *(ptr + off) & 0xff);
+ 			dump += 2;
+ 		}
+ 
+ 		tuple = BuildTupleFromCStrings(fctx->attinmeta, values);
+ 		result = HeapTupleGetDatum(tuple);
+ 
+ 		uargs->offset = uargs->offset + 1;
+ 
+ 		SRF_RETURN_NEXT(fctx, result);
+ 	}
+ 	else
+ 	{
+ 		pfree(uargs->page);
+ 		pfree(uargs);
+ 		SRF_RETURN_DONE(fctx);
+ 	}
+ }
+ 
+ 
+ /* ------------------------------------------------
+  * bt_metap()
+  *
+  * Get a btree's meta-page information
+  *
+  * Usage: SELECT * FROM bt_metap('t1_pkey')
+  * ------------------------------------------------
+  */
+ Datum
+ bt_metap(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	Datum		result;
+ 	Relation	rel;
+ 	RangeVar   *relrv;
+ 	BTMetaPageData *metad;
+ 	TupleDesc	tupleDesc;
+ 	int			j;
+ 	char	   *values[6];
+ 	Buffer		buffer;
+ 	Page		page;
+ 	HeapTuple	tuple;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use pageinspect functions"))));
+ 
+ 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 	rel = relation_openrv(relrv, AccessShareLock);
+ 
+ 	if (!IS_INDEX(rel) || !IS_BTREE(rel))
+ 		elog(ERROR, "relation \"%s\" is not a btree index",
+ 			 RelationGetRelationName(rel));
+ 
+ 	/*
+ 	 * Reject attempts to read non-local temporary relations; we would be
+ 	 * likely to get wrong data since we have no visibility into the owning
+ 	 * session's local buffers.
+ 	 */
+ 	if (RELATION_IS_OTHER_TEMP(rel))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("cannot access temporary tables of other sessions")));
+ 
+ 	buffer = ReadBuffer(rel, 0);
+ 	page = BufferGetPage(buffer);
+ 	metad = BTPageGetMeta(page);
+ 
+ 	/* Build a tuple descriptor for our result type */
+ 	if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+ 		elog(ERROR, "return type must be a row type");
+ 
+ 	j = 0;
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", metad->btm_magic);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", metad->btm_version);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", metad->btm_root);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", metad->btm_level);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", metad->btm_fastroot);
+ 	values[j] = palloc(32);
+ 	snprintf(values[j++], 32, "%d", metad->btm_fastlevel);
+ 
+ 	tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+ 								   values);
+ 
+ 	result = HeapTupleGetDatum(tuple);
+ 
+ 	ReleaseBuffer(buffer);
+ 
+ 	relation_close(rel, AccessShareLock);
+ 
+ 	PG_RETURN_DATUM(result);
+ }
diff --git a/src/extension/pageinspect/fsmfuncs.c b/src/extension/pageinspect/fsmfuncs.c
index ...a2664d6 .
*** a/src/extension/pageinspect/fsmfuncs.c
--- b/src/extension/pageinspect/fsmfuncs.c
***************
*** 0 ****
--- 1,58 ----
+ /*-------------------------------------------------------------------------
+  *
+  * fsmfuncs.c
+  *	  Functions to investigate FSM pages
+  *
+  * These functions are restricted to superusers for the fear of introducing
+  * security holes if the input checking isn't as water-tight as it should.
+  * You'd need to be superuser to obtain a raw page image anyway, so
+  * there's hardly any use case for using these without superuser-rights
+  * anyway.
+  *
+  * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+  *
+  * IDENTIFICATION
+  *	  src/extension/pageinspect/fsmfuncs.c
+  *
+  *-------------------------------------------------------------------------
+  */
+ 
+ #include "postgres.h"
+ #include "storage/fsm_internals.h"
+ #include "utils/builtins.h"
+ #include "miscadmin.h"
+ #include "funcapi.h"
+ 
+ Datum		fsm_page_contents(PG_FUNCTION_ARGS);
+ 
+ /*
+  * Dumps the contents of a FSM page.
+  */
+ PG_FUNCTION_INFO_V1(fsm_page_contents);
+ 
+ Datum
+ fsm_page_contents(PG_FUNCTION_ARGS)
+ {
+ 	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
+ 	StringInfoData sinfo;
+ 	FSMPage		fsmpage;
+ 	int			i;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use raw page functions"))));
+ 
+ 	fsmpage = (FSMPage) PageGetContents(VARDATA(raw_page));
+ 
+ 	initStringInfo(&sinfo);
+ 
+ 	for (i = 0; i < NodesPerPage; i++)
+ 	{
+ 		if (fsmpage->fp_nodes[i] != 0)
+ 			appendStringInfo(&sinfo, "%d: %d\n", i, fsmpage->fp_nodes[i]);
+ 	}
+ 	appendStringInfo(&sinfo, "fp_next_slot: %d\n", fsmpage->fp_next_slot);
+ 
+ 	PG_RETURN_TEXT_P(cstring_to_text(sinfo.data));
+ }
diff --git a/src/extension/pageinspect/heapfuncs.c b/src/extension/pageinspect/heapfuncs.c
index ...6593f5b .
*** a/src/extension/pageinspect/heapfuncs.c
--- b/src/extension/pageinspect/heapfuncs.c
***************
*** 0 ****
--- 1,225 ----
+ /*-------------------------------------------------------------------------
+  *
+  * heapfuncs.c
+  *	  Functions to investigate heap pages
+  *
+  * We check the input to these functions for corrupt pointers etc. that
+  * might cause crashes, but at the same time we try to print out as much
+  * information as possible, even if it's nonsense. That's because if a
+  * page is corrupt, we don't know why and how exactly it is corrupt, so we
+  * let the user judge it.
+  *
+  * These functions are restricted to superusers for the fear of introducing
+  * security holes if the input checking isn't as water-tight as it should be.
+  * You'd need to be superuser to obtain a raw page image anyway, so
+  * there's hardly any use case for using these without superuser-rights
+  * anyway.
+  *
+  * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+  *
+  * IDENTIFICATION
+  *	  src/extension/pageinspect/heapfuncs.c
+  *
+  *-------------------------------------------------------------------------
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "funcapi.h"
+ #include "utils/builtins.h"
+ #include "miscadmin.h"
+ 
+ Datum		heap_page_items(PG_FUNCTION_ARGS);
+ 
+ 
+ /*
+  * bits_to_text
+  *
+  * Converts a bits8-array of 'len' bits to a human-readable
+  * c-string representation.
+  */
+ static char *
+ bits_to_text(bits8 *bits, int len)
+ {
+ 	int			i;
+ 	char	   *str;
+ 
+ 	str = palloc(len + 1);
+ 
+ 	for (i = 0; i < len; i++)
+ 		str[i] = (bits[(i / 8)] & (1 << (i % 8))) ? '1' : '0';
+ 
+ 	str[i] = '\0';
+ 
+ 	return str;
+ }
+ 
+ 
+ /*
+  * heap_page_items
+  *
+  * Allows inspection of line pointers and tuple headers of a heap page.
+  */
+ PG_FUNCTION_INFO_V1(heap_page_items);
+ 
+ typedef struct heap_page_items_state
+ {
+ 	TupleDesc	tupd;
+ 	Page		page;
+ 	uint16		offset;
+ } heap_page_items_state;
+ 
+ Datum
+ heap_page_items(PG_FUNCTION_ARGS)
+ {
+ 	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
+ 	heap_page_items_state *inter_call_data = NULL;
+ 	FuncCallContext *fctx;
+ 	int			raw_page_size;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use raw page functions"))));
+ 
+ 	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
+ 
+ 	if (SRF_IS_FIRSTCALL())
+ 	{
+ 		TupleDesc	tupdesc;
+ 		MemoryContext mctx;
+ 
+ 		if (raw_page_size < SizeOfPageHeaderData)
+ 			ereport(ERROR,
+ 					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 				  errmsg("input page too small (%d bytes)", raw_page_size)));
+ 
+ 		fctx = SRF_FIRSTCALL_INIT();
+ 		mctx = MemoryContextSwitchTo(fctx->multi_call_memory_ctx);
+ 
+ 		inter_call_data = palloc(sizeof(heap_page_items_state));
+ 
+ 		/* Build a tuple descriptor for our result type */
+ 		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ 			elog(ERROR, "return type must be a row type");
+ 
+ 		inter_call_data->tupd = tupdesc;
+ 
+ 		inter_call_data->offset = FirstOffsetNumber;
+ 		inter_call_data->page = VARDATA(raw_page);
+ 
+ 		fctx->max_calls = PageGetMaxOffsetNumber(inter_call_data->page);
+ 		fctx->user_fctx = inter_call_data;
+ 
+ 		MemoryContextSwitchTo(mctx);
+ 	}
+ 
+ 	fctx = SRF_PERCALL_SETUP();
+ 	inter_call_data = fctx->user_fctx;
+ 
+ 	if (fctx->call_cntr < fctx->max_calls)
+ 	{
+ 		Page		page = inter_call_data->page;
+ 		HeapTuple	resultTuple;
+ 		Datum		result;
+ 		ItemId		id;
+ 		Datum		values[13];
+ 		bool		nulls[13];
+ 		uint16		lp_offset;
+ 		uint16		lp_flags;
+ 		uint16		lp_len;
+ 
+ 		memset(nulls, 0, sizeof(nulls));
+ 
+ 		/* Extract information from the line pointer */
+ 
+ 		id = PageGetItemId(page, inter_call_data->offset);
+ 
+ 		lp_offset = ItemIdGetOffset(id);
+ 		lp_flags = ItemIdGetFlags(id);
+ 		lp_len = ItemIdGetLength(id);
+ 
+ 		values[0] = UInt16GetDatum(inter_call_data->offset);
+ 		values[1] = UInt16GetDatum(lp_offset);
+ 		values[2] = UInt16GetDatum(lp_flags);
+ 		values[3] = UInt16GetDatum(lp_len);
+ 
+ 		/*
+ 		 * We do just enough validity checking to make sure we don't reference
+ 		 * data outside the page passed to us. The page could be corrupt in
+ 		 * many other ways, but at least we won't crash.
+ 		 */
+ 		if (ItemIdHasStorage(id) &&
+ 			lp_len >= sizeof(HeapTupleHeader) &&
+ 			lp_offset == MAXALIGN(lp_offset) &&
+ 			lp_offset + lp_len <= raw_page_size)
+ 		{
+ 			HeapTupleHeader tuphdr;
+ 			int			bits_len;
+ 
+ 			/* Extract information from the tuple header */
+ 
+ 			tuphdr = (HeapTupleHeader) PageGetItem(page, id);
+ 
+ 			values[4] = UInt32GetDatum(HeapTupleHeaderGetXmin(tuphdr));
+ 			values[5] = UInt32GetDatum(HeapTupleHeaderGetXmax(tuphdr));
+ 			values[6] = UInt32GetDatum(HeapTupleHeaderGetRawCommandId(tuphdr)); /* shared with xvac */
+ 			values[7] = PointerGetDatum(&tuphdr->t_ctid);
+ 			values[8] = UInt32GetDatum(tuphdr->t_infomask2);
+ 			values[9] = UInt32GetDatum(tuphdr->t_infomask);
+ 			values[10] = UInt8GetDatum(tuphdr->t_hoff);
+ 
+ 			/*
+ 			 * We already checked that the item as is completely within the
+ 			 * raw page passed to us, with the length given in the line
+ 			 * pointer.. Let's check that t_hoff doesn't point over lp_len,
+ 			 * before using it to access t_bits and oid.
+ 			 */
+ 			if (tuphdr->t_hoff >= sizeof(HeapTupleHeader) &&
+ 				tuphdr->t_hoff <= lp_len)
+ 			{
+ 				if (tuphdr->t_infomask & HEAP_HASNULL)
+ 				{
+ 					bits_len = tuphdr->t_hoff -
+ 						(((char *) tuphdr->t_bits) -((char *) tuphdr));
+ 
+ 					values[11] = CStringGetTextDatum(
+ 								 bits_to_text(tuphdr->t_bits, bits_len * 8));
+ 				}
+ 				else
+ 					nulls[11] = true;
+ 
+ 				if (tuphdr->t_infomask & HEAP_HASOID)
+ 					values[12] = HeapTupleHeaderGetOid(tuphdr);
+ 				else
+ 					nulls[12] = true;
+ 			}
+ 			else
+ 			{
+ 				nulls[11] = true;
+ 				nulls[12] = true;
+ 			}
+ 		}
+ 		else
+ 		{
+ 			/*
+ 			 * The line pointer is not used, or it's invalid. Set the rest of
+ 			 * the fields to NULL
+ 			 */
+ 			int			i;
+ 
+ 			for (i = 4; i <= 12; i++)
+ 				nulls[i] = true;
+ 		}
+ 
+ 		/* Build and return the result tuple. */
+ 		resultTuple = heap_form_tuple(inter_call_data->tupd, values, nulls);
+ 		result = HeapTupleGetDatum(resultTuple);
+ 
+ 		inter_call_data->offset++;
+ 
+ 		SRF_RETURN_NEXT(fctx, result);
+ 	}
+ 	else
+ 		SRF_RETURN_DONE(fctx);
+ }
diff --git a/src/extension/pageinspect/pageinspect--1.0.sql b/src/extension/pageinspect/pageinspect--1.0.sql
index ...ad451ae .
*** a/src/extension/pageinspect/pageinspect--1.0.sql
--- b/src/extension/pageinspect/pageinspect--1.0.sql
***************
*** 0 ****
--- 1,107 ----
+ /* src/extension/pageinspect/pageinspect--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pageinspect" to load this file. \quit
+ 
+ --
+ -- get_raw_page()
+ --
+ CREATE FUNCTION get_raw_page(text, int4)
+ RETURNS bytea
+ AS 'MODULE_PATHNAME', 'get_raw_page'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION get_raw_page(text, text, int4)
+ RETURNS bytea
+ AS 'MODULE_PATHNAME', 'get_raw_page_fork'
+ LANGUAGE C STRICT;
+ 
+ --
+ -- page_header()
+ --
+ CREATE FUNCTION page_header(IN page bytea,
+     OUT lsn text,
+     OUT tli smallint,
+     OUT flags smallint,
+     OUT lower smallint,
+     OUT upper smallint,
+     OUT special smallint,
+     OUT pagesize smallint,
+     OUT version smallint,
+     OUT prune_xid xid)
+ AS 'MODULE_PATHNAME', 'page_header'
+ LANGUAGE C STRICT;
+ 
+ --
+ -- heap_page_items()
+ --
+ CREATE FUNCTION heap_page_items(IN page bytea,
+     OUT lp smallint,
+     OUT lp_off smallint,
+     OUT lp_flags smallint,
+     OUT lp_len smallint,
+     OUT t_xmin xid,
+     OUT t_xmax xid,
+     OUT t_field3 int4,
+     OUT t_ctid tid,
+     OUT t_infomask2 integer,
+     OUT t_infomask integer,
+     OUT t_hoff smallint,
+     OUT t_bits text,
+     OUT t_oid oid)
+ RETURNS SETOF record
+ AS 'MODULE_PATHNAME', 'heap_page_items'
+ LANGUAGE C STRICT;
+ 
+ --
+ -- bt_metap()
+ --
+ CREATE FUNCTION bt_metap(IN relname text,
+     OUT magic int4,
+     OUT version int4,
+     OUT root int4,
+     OUT level int4,
+     OUT fastroot int4,
+     OUT fastlevel int4)
+ AS 'MODULE_PATHNAME', 'bt_metap'
+ LANGUAGE C STRICT;
+ 
+ --
+ -- bt_page_stats()
+ --
+ CREATE FUNCTION bt_page_stats(IN relname text, IN blkno int4,
+     OUT blkno int4,
+     OUT type "char",
+     OUT live_items int4,
+     OUT dead_items int4,
+     OUT avg_item_size int4,
+     OUT page_size int4,
+     OUT free_size int4,
+     OUT btpo_prev int4,
+     OUT btpo_next int4,
+     OUT btpo int4,
+     OUT btpo_flags int4)
+ AS 'MODULE_PATHNAME', 'bt_page_stats'
+ LANGUAGE C STRICT;
+ 
+ --
+ -- bt_page_items()
+ --
+ CREATE FUNCTION bt_page_items(IN relname text, IN blkno int4,
+     OUT itemoffset smallint,
+     OUT ctid tid,
+     OUT itemlen smallint,
+     OUT nulls bool,
+     OUT vars bool,
+     OUT data text)
+ RETURNS SETOF record
+ AS 'MODULE_PATHNAME', 'bt_page_items'
+ LANGUAGE C STRICT;
+ 
+ --
+ -- fsm_page_contents()
+ --
+ CREATE FUNCTION fsm_page_contents(IN page bytea)
+ RETURNS text
+ AS 'MODULE_PATHNAME', 'fsm_page_contents'
+ LANGUAGE C STRICT;
diff --git a/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql b/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql
index ...3f90fd9 .
*** a/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql
--- b/src/extension/pageinspect/pageinspect--unpackaged--1.0.sql
***************
*** 0 ****
--- 1,31 ----
+ /* src/extension/pageinspect/pageinspect--unpackaged--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pageinspect" to load this file. \quit
+ 
+ DROP FUNCTION heap_page_items(bytea);
+ CREATE FUNCTION heap_page_items(IN page bytea,
+ 	OUT lp smallint,
+ 	OUT lp_off smallint,
+ 	OUT lp_flags smallint,
+ 	OUT lp_len smallint,
+ 	OUT t_xmin xid,
+ 	OUT t_xmax xid,
+ 	OUT t_field3 int4,
+ 	OUT t_ctid tid,
+ 	OUT t_infomask2 integer,
+ 	OUT t_infomask integer,
+ 	OUT t_hoff smallint,
+ 	OUT t_bits text,
+ 	OUT t_oid oid)
+ RETURNS SETOF record
+ AS 'MODULE_PATHNAME', 'heap_page_items'
+ LANGUAGE C STRICT;
+ 
+ ALTER EXTENSION pageinspect ADD function get_raw_page(text,integer);
+ ALTER EXTENSION pageinspect ADD function get_raw_page(text,text,integer);
+ ALTER EXTENSION pageinspect ADD function page_header(bytea);
+ ALTER EXTENSION pageinspect ADD function bt_metap(text);
+ ALTER EXTENSION pageinspect ADD function bt_page_stats(text,integer);
+ ALTER EXTENSION pageinspect ADD function bt_page_items(text,integer);
+ ALTER EXTENSION pageinspect ADD function fsm_page_contents(bytea);
diff --git a/src/extension/pageinspect/pageinspect.control b/src/extension/pageinspect/pageinspect.control
index ...f9da0e8 .
*** a/src/extension/pageinspect/pageinspect.control
--- b/src/extension/pageinspect/pageinspect.control
***************
*** 0 ****
--- 1,5 ----
+ # pageinspect extension
+ comment = 'inspect the contents of database pages at a low level'
+ default_version = '1.0'
+ module_pathname = '$libdir/pageinspect'
+ relocatable = true
diff --git a/src/extension/pageinspect/rawpage.c b/src/extension/pageinspect/rawpage.c
index ...466fc3a .
*** a/src/extension/pageinspect/rawpage.c
--- b/src/extension/pageinspect/rawpage.c
***************
*** 0 ****
--- 1,229 ----
+ /*-------------------------------------------------------------------------
+  *
+  * rawpage.c
+  *	  Functions to extract a raw page as bytea and inspect it
+  *
+  * Access-method specific inspection functions are in separate files.
+  *
+  * Copyright (c) 2007-2011, PostgreSQL Global Development Group
+  *
+  * IDENTIFICATION
+  *	  src/extension/pageinspect/rawpage.c
+  *
+  *-------------------------------------------------------------------------
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "catalog/catalog.h"
+ #include "catalog/namespace.h"
+ #include "funcapi.h"
+ #include "miscadmin.h"
+ #include "storage/bufmgr.h"
+ #include "utils/builtins.h"
+ #include "utils/rel.h"
+ 
+ PG_MODULE_MAGIC;
+ 
+ Datum		get_raw_page(PG_FUNCTION_ARGS);
+ Datum		get_raw_page_fork(PG_FUNCTION_ARGS);
+ Datum		page_header(PG_FUNCTION_ARGS);
+ 
+ static bytea *get_raw_page_internal(text *relname, ForkNumber forknum,
+ 					  BlockNumber blkno);
+ 
+ 
+ /*
+  * get_raw_page
+  *
+  * Returns a copy of a page from shared buffers as a bytea
+  */
+ PG_FUNCTION_INFO_V1(get_raw_page);
+ 
+ Datum
+ get_raw_page(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	uint32		blkno = PG_GETARG_UINT32(1);
+ 	bytea	   *raw_page;
+ 
+ 	/*
+ 	 * We don't normally bother to check the number of arguments to a C
+ 	 * function, but here it's needed for safety because early 8.4 beta
+ 	 * releases mistakenly redefined get_raw_page() as taking three arguments.
+ 	 */
+ 	if (PG_NARGS() != 2)
+ 		ereport(ERROR,
+ 				(errmsg("wrong number of arguments to get_raw_page()"),
+ 				 errhint("Run the updated pageinspect.sql script.")));
+ 
+ 	raw_page = get_raw_page_internal(relname, MAIN_FORKNUM, blkno);
+ 
+ 	PG_RETURN_BYTEA_P(raw_page);
+ }
+ 
+ /*
+  * get_raw_page_fork
+  *
+  * Same, for any fork
+  */
+ PG_FUNCTION_INFO_V1(get_raw_page_fork);
+ 
+ Datum
+ get_raw_page_fork(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	text	   *forkname = PG_GETARG_TEXT_P(1);
+ 	uint32		blkno = PG_GETARG_UINT32(2);
+ 	bytea	   *raw_page;
+ 	ForkNumber	forknum;
+ 
+ 	forknum = forkname_to_number(text_to_cstring(forkname));
+ 
+ 	raw_page = get_raw_page_internal(relname, forknum, blkno);
+ 
+ 	PG_RETURN_BYTEA_P(raw_page);
+ }
+ 
+ /*
+  * workhorse
+  */
+ static bytea *
+ get_raw_page_internal(text *relname, ForkNumber forknum, BlockNumber blkno)
+ {
+ 	bytea	   *raw_page;
+ 	RangeVar   *relrv;
+ 	Relation	rel;
+ 	char	   *raw_page_data;
+ 	Buffer		buf;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use raw functions"))));
+ 
+ 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 	rel = relation_openrv(relrv, AccessShareLock);
+ 
+ 	/* Check that this relation has storage */
+ 	if (rel->rd_rel->relkind == RELKIND_VIEW)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ 				 errmsg("cannot get raw page from view \"%s\"",
+ 						RelationGetRelationName(rel))));
+ 	if (rel->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ 				 errmsg("cannot get raw page from composite type \"%s\"",
+ 						RelationGetRelationName(rel))));
+ 	if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+ 				 errmsg("cannot get raw page from foreign table \"%s\"",
+ 						RelationGetRelationName(rel))));
+ 
+ 	/*
+ 	 * Reject attempts to read non-local temporary relations; we would be
+ 	 * likely to get wrong data since we have no visibility into the owning
+ 	 * session's local buffers.
+ 	 */
+ 	if (RELATION_IS_OTHER_TEMP(rel))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("cannot access temporary tables of other sessions")));
+ 
+ 	if (blkno >= RelationGetNumberOfBlocks(rel))
+ 		elog(ERROR, "block number %u is out of range for relation \"%s\"",
+ 			 blkno, RelationGetRelationName(rel));
+ 
+ 	/* Initialize buffer to copy to */
+ 	raw_page = (bytea *) palloc(BLCKSZ + VARHDRSZ);
+ 	SET_VARSIZE(raw_page, BLCKSZ + VARHDRSZ);
+ 	raw_page_data = VARDATA(raw_page);
+ 
+ 	/* Take a verbatim copy of the page */
+ 
+ 	buf = ReadBufferExtended(rel, forknum, blkno, RBM_NORMAL, NULL);
+ 	LockBuffer(buf, BUFFER_LOCK_SHARE);
+ 
+ 	memcpy(raw_page_data, BufferGetPage(buf), BLCKSZ);
+ 
+ 	LockBuffer(buf, BUFFER_LOCK_UNLOCK);
+ 	ReleaseBuffer(buf);
+ 
+ 	relation_close(rel, AccessShareLock);
+ 
+ 	return raw_page;
+ }
+ 
+ /*
+  * page_header
+  *
+  * Allows inspection of page header fields of a raw page
+  */
+ 
+ PG_FUNCTION_INFO_V1(page_header);
+ 
+ Datum
+ page_header(PG_FUNCTION_ARGS)
+ {
+ 	bytea	   *raw_page = PG_GETARG_BYTEA_P(0);
+ 	int			raw_page_size;
+ 
+ 	TupleDesc	tupdesc;
+ 
+ 	Datum		result;
+ 	HeapTuple	tuple;
+ 	Datum		values[9];
+ 	bool		nulls[9];
+ 
+ 	PageHeader	page;
+ 	XLogRecPtr	lsn;
+ 	char		lsnchar[64];
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use raw page functions"))));
+ 
+ 	raw_page_size = VARSIZE(raw_page) - VARHDRSZ;
+ 
+ 	/*
+ 	 * Check that enough data was supplied, so that we don't try to access
+ 	 * fields outside the supplied buffer.
+ 	 */
+ 	if (raw_page_size < sizeof(PageHeaderData))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 				 errmsg("input page too small (%d bytes)", raw_page_size)));
+ 
+ 	page = (PageHeader) VARDATA(raw_page);
+ 
+ 	/* Build a tuple descriptor for our result type */
+ 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ 		elog(ERROR, "return type must be a row type");
+ 
+ 	/* Extract information from the page header */
+ 
+ 	lsn = PageGetLSN(page);
+ 	snprintf(lsnchar, sizeof(lsnchar), "%X/%X", lsn.xlogid, lsn.xrecoff);
+ 
+ 	values[0] = CStringGetTextDatum(lsnchar);
+ 	values[1] = UInt16GetDatum(PageGetTLI(page));
+ 	values[2] = UInt16GetDatum(page->pd_flags);
+ 	values[3] = UInt16GetDatum(page->pd_lower);
+ 	values[4] = UInt16GetDatum(page->pd_upper);
+ 	values[5] = UInt16GetDatum(page->pd_special);
+ 	values[6] = UInt16GetDatum(PageGetPageSize(page));
+ 	values[7] = UInt16GetDatum(PageGetPageLayoutVersion(page));
+ 	values[8] = TransactionIdGetDatum(page->pd_prune_xid);
+ 
+ 	/* Build and return the tuple. */
+ 
+ 	memset(nulls, 0, sizeof(nulls));
+ 
+ 	tuple = heap_form_tuple(tupdesc, values, nulls);
+ 	result = HeapTupleGetDatum(tuple);
+ 
+ 	PG_RETURN_DATUM(result);
+ }
diff --git a/src/extension/pg_buffercache/Makefile b/src/extension/pg_buffercache/Makefile
index ...9d1a55d .
*** a/src/extension/pg_buffercache/Makefile
--- b/src/extension/pg_buffercache/Makefile
***************
*** 0 ****
--- 1,18 ----
+ # src/extension/pg_buffercache/Makefile
+ 
+ MODULE_big = pg_buffercache
+ OBJS = pg_buffercache_pages.o
+ 
+ EXTENSION = pg_buffercache
+ DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/extension/pg_buffercache
+ top_builddir = ../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/src/extension/extension-global.mk
+ endif
diff --git a/src/extension/pg_buffercache/pg_buffercache--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
index ...28e584b .
*** a/src/extension/pg_buffercache/pg_buffercache--1.0.sql
--- b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
***************
*** 0 ****
--- 1,20 ----
+ /* src/extension/pg_buffercache/pg_buffercache--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pg_buffercache" to load this file. \quit
+ 
+ -- Register the function.
+ CREATE FUNCTION pg_buffercache_pages()
+ RETURNS SETOF RECORD
+ AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
+ LANGUAGE C;
+ 
+ -- Create a view for convenient access.
+ CREATE VIEW pg_buffercache AS
+ 	SELECT P.* FROM pg_buffercache_pages() AS P
+ 	(bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
+ 	 relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
+ 
+ -- Don't want these to be available to public.
+ REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
+ REVOKE ALL ON pg_buffercache FROM PUBLIC;
diff --git a/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
index ...a4e6f74 .
*** a/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
--- b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
***************
*** 0 ****
--- 1,7 ----
+ /* src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pg_buffercache" to load this file. \quit
+ 
+ ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
+ ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
diff --git a/src/extension/pg_buffercache/pg_buffercache.control b/src/extension/pg_buffercache/pg_buffercache.control
index ...709513c .
*** a/src/extension/pg_buffercache/pg_buffercache.control
--- b/src/extension/pg_buffercache/pg_buffercache.control
***************
*** 0 ****
--- 1,5 ----
+ # pg_buffercache extension
+ comment = 'examine the shared buffer cache'
+ default_version = '1.0'
+ module_pathname = '$libdir/pg_buffercache'
+ relocatable = true
diff --git a/src/extension/pg_buffercache/pg_buffercache_pages.c b/src/extension/pg_buffercache/pg_buffercache_pages.c
index ...d6ef91b .
*** a/src/extension/pg_buffercache/pg_buffercache_pages.c
--- b/src/extension/pg_buffercache/pg_buffercache_pages.c
***************
*** 0 ****
--- 1,217 ----
+ /*-------------------------------------------------------------------------
+  *
+  * pg_buffercache_pages.c
+  *	  display some contents of the buffer cache
+  *
+  *	  src/extension/pg_buffercache/pg_buffercache_pages.c
+  *-------------------------------------------------------------------------
+  */
+ #include "postgres.h"
+ 
+ #include "catalog/pg_type.h"
+ #include "funcapi.h"
+ #include "storage/buf_internals.h"
+ #include "storage/bufmgr.h"
+ 
+ 
+ #define NUM_BUFFERCACHE_PAGES_ELEM	8
+ 
+ PG_MODULE_MAGIC;
+ 
+ Datum		pg_buffercache_pages(PG_FUNCTION_ARGS);
+ 
+ 
+ /*
+  * Record structure holding the to be exposed cache data.
+  */
+ typedef struct
+ {
+ 	uint32		bufferid;
+ 	Oid			relfilenode;
+ 	Oid			reltablespace;
+ 	Oid			reldatabase;
+ 	ForkNumber	forknum;
+ 	BlockNumber blocknum;
+ 	bool		isvalid;
+ 	bool		isdirty;
+ 	uint16		usagecount;
+ } BufferCachePagesRec;
+ 
+ 
+ /*
+  * Function context for data persisting over repeated calls.
+  */
+ typedef struct
+ {
+ 	TupleDesc	tupdesc;
+ 	BufferCachePagesRec *record;
+ } BufferCachePagesContext;
+ 
+ 
+ /*
+  * Function returning data from the shared buffer cache - buffer number,
+  * relation node/tablespace/database/blocknum and dirty indicator.
+  */
+ PG_FUNCTION_INFO_V1(pg_buffercache_pages);
+ 
+ Datum
+ pg_buffercache_pages(PG_FUNCTION_ARGS)
+ {
+ 	FuncCallContext *funcctx;
+ 	Datum		result;
+ 	MemoryContext oldcontext;
+ 	BufferCachePagesContext *fctx;		/* User function context. */
+ 	TupleDesc	tupledesc;
+ 	HeapTuple	tuple;
+ 
+ 	if (SRF_IS_FIRSTCALL())
+ 	{
+ 		int			i;
+ 		volatile BufferDesc *bufHdr;
+ 
+ 		funcctx = SRF_FIRSTCALL_INIT();
+ 
+ 		/* Switch context when allocating stuff to be used in later calls */
+ 		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+ 
+ 		/* Create a user function context for cross-call persistence */
+ 		fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
+ 
+ 		/* Construct a tuple descriptor for the result rows. */
+ 		tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
+ 						   INT4OID, -1, 0);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
+ 						   OIDOID, -1, 0);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
+ 						   OIDOID, -1, 0);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
+ 						   OIDOID, -1, 0);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
+ 						   INT2OID, -1, 0);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
+ 						   INT8OID, -1, 0);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
+ 						   BOOLOID, -1, 0);
+ 		TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
+ 						   INT2OID, -1, 0);
+ 
+ 		fctx->tupdesc = BlessTupleDesc(tupledesc);
+ 
+ 		/* Allocate NBuffers worth of BufferCachePagesRec records. */
+ 		fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
+ 
+ 		/* Set max calls and remember the user function context. */
+ 		funcctx->max_calls = NBuffers;
+ 		funcctx->user_fctx = fctx;
+ 
+ 		/* Return to original context when allocating transient memory */
+ 		MemoryContextSwitchTo(oldcontext);
+ 
+ 		/*
+ 		 * To get a consistent picture of the buffer state, we must lock all
+ 		 * partitions of the buffer map.  Needless to say, this is horrible
+ 		 * for concurrency.  Must grab locks in increasing order to avoid
+ 		 * possible deadlocks.
+ 		 */
+ 		for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
+ 			LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
+ 
+ 		/*
+ 		 * Scan though all the buffers, saving the relevant fields in the
+ 		 * fctx->record structure.
+ 		 */
+ 		for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
+ 		{
+ 			/* Lock each buffer header before inspecting. */
+ 			LockBufHdr(bufHdr);
+ 
+ 			fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
+ 			fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
+ 			fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
+ 			fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
+ 			fctx->record[i].forknum = bufHdr->tag.forkNum;
+ 			fctx->record[i].blocknum = bufHdr->tag.blockNum;
+ 			fctx->record[i].usagecount = bufHdr->usage_count;
+ 
+ 			if (bufHdr->flags & BM_DIRTY)
+ 				fctx->record[i].isdirty = true;
+ 			else
+ 				fctx->record[i].isdirty = false;
+ 
+ 			/* Note if the buffer is valid, and has storage created */
+ 			if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
+ 				fctx->record[i].isvalid = true;
+ 			else
+ 				fctx->record[i].isvalid = false;
+ 
+ 			UnlockBufHdr(bufHdr);
+ 		}
+ 
+ 		/*
+ 		 * And release locks.  We do this in reverse order for two reasons:
+ 		 * (1) Anyone else who needs more than one of the locks will be trying
+ 		 * to lock them in increasing order; we don't want to release the
+ 		 * other process until it can get all the locks it needs. (2) This
+ 		 * avoids O(N^2) behavior inside LWLockRelease.
+ 		 */
+ 		for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
+ 			LWLockRelease(FirstBufMappingLock + i);
+ 	}
+ 
+ 	funcctx = SRF_PERCALL_SETUP();
+ 
+ 	/* Get the saved state */
+ 	fctx = funcctx->user_fctx;
+ 
+ 	if (funcctx->call_cntr < funcctx->max_calls)
+ 	{
+ 		uint32		i = funcctx->call_cntr;
+ 		Datum		values[NUM_BUFFERCACHE_PAGES_ELEM];
+ 		bool		nulls[NUM_BUFFERCACHE_PAGES_ELEM];
+ 
+ 		values[0] = Int32GetDatum(fctx->record[i].bufferid);
+ 		nulls[0] = false;
+ 
+ 		/*
+ 		 * Set all fields except the bufferid to null if the buffer is unused
+ 		 * or not valid.
+ 		 */
+ 		if (fctx->record[i].blocknum == InvalidBlockNumber ||
+ 			fctx->record[i].isvalid == false)
+ 		{
+ 			nulls[1] = true;
+ 			nulls[2] = true;
+ 			nulls[3] = true;
+ 			nulls[4] = true;
+ 			nulls[5] = true;
+ 			nulls[6] = true;
+ 			nulls[7] = true;
+ 		}
+ 		else
+ 		{
+ 			values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
+ 			nulls[1] = false;
+ 			values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
+ 			nulls[2] = false;
+ 			values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
+ 			nulls[3] = false;
+ 			values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
+ 			nulls[4] = false;
+ 			values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
+ 			nulls[5] = false;
+ 			values[6] = BoolGetDatum(fctx->record[i].isdirty);
+ 			nulls[6] = false;
+ 			values[7] = Int16GetDatum(fctx->record[i].usagecount);
+ 			nulls[7] = false;
+ 		}
+ 
+ 		/* Build and return the tuple. */
+ 		tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
+ 		result = HeapTupleGetDatum(tuple);
+ 
+ 		SRF_RETURN_NEXT(funcctx, result);
+ 	}
+ 	else
+ 		SRF_RETURN_DONE(funcctx);
+ }
diff --git a/src/extension/pg_freespacemap/Makefile b/src/extension/pg_freespacemap/Makefile
index ...5d33ffb .
*** a/src/extension/pg_freespacemap/Makefile
--- b/src/extension/pg_freespacemap/Makefile
***************
*** 0 ****
--- 1,18 ----
+ # src/extensions/pg_freespacemap/Makefile
+ 
+ MODULE_big = pg_freespacemap
+ OBJS = pg_freespacemap.o
+ 
+ EXTENSION = pg_freespacemap
+ DATA = pg_freespacemap--1.0.sql pg_freespacemap--unpackaged--1.0.sql
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/extension/pg_freespacemap
+ top_builddir = ../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/src/extension/extension-global.mk
+ endif
diff --git a/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql b/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql
index ...616c26e .
*** a/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql
--- b/src/extension/pg_freespacemap/pg_freespacemap--1.0.sql
***************
*** 0 ****
--- 1,25 ----
+ /* src/extension/pg_freespacemap/pg_freespacemap--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pg_freespacemap" to load this file. \quit
+ 
+ -- Register the C function.
+ CREATE FUNCTION pg_freespace(regclass, bigint)
+ RETURNS int2
+ AS 'MODULE_PATHNAME', 'pg_freespace'
+ LANGUAGE C STRICT;
+ 
+ -- pg_freespace shows the recorded space avail at each block in a relation
+ CREATE FUNCTION
+   pg_freespace(rel regclass, blkno OUT bigint, avail OUT int2)
+ RETURNS SETOF RECORD
+ AS $$
+   SELECT blkno, pg_freespace($1, blkno) AS avail
+   FROM generate_series(0, pg_relation_size($1) / current_setting('block_size')::bigint - 1) AS blkno;
+ $$
+ LANGUAGE SQL;
+ 
+ 
+ -- Don't want these to be available to public.
+ REVOKE ALL ON FUNCTION pg_freespace(regclass, bigint) FROM PUBLIC;
+ REVOKE ALL ON FUNCTION pg_freespace(regclass) FROM PUBLIC;
diff --git a/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql b/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
index ...c1082b4 .
*** a/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
--- b/src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql
***************
*** 0 ****
--- 1,7 ----
+ /* src/extension/pg_freespacemap/pg_freespacemap--unpackaged--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pg_freespacemap" to load this file. \quit
+ 
+ ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass,bigint);
+ ALTER EXTENSION pg_freespacemap ADD function pg_freespace(regclass);
diff --git a/src/extension/pg_freespacemap/pg_freespacemap.c b/src/extension/pg_freespacemap/pg_freespacemap.c
index ...91d76f9 .
*** a/src/extension/pg_freespacemap/pg_freespacemap.c
--- b/src/extension/pg_freespacemap/pg_freespacemap.c
***************
*** 0 ****
--- 1,44 ----
+ /*-------------------------------------------------------------------------
+  *
+  * pg_freespacemap.c
+  *	  display contents of a free space map
+  *
+  *	  src/extension/pg_freespacemap/pg_freespacemap.c
+  *-------------------------------------------------------------------------
+  */
+ #include "postgres.h"
+ 
+ #include "funcapi.h"
+ #include "storage/freespace.h"
+ 
+ 
+ PG_MODULE_MAGIC;
+ 
+ Datum		pg_freespace(PG_FUNCTION_ARGS);
+ 
+ /*
+  * Returns the amount of free space on a given page, according to the
+  * free space map.
+  */
+ PG_FUNCTION_INFO_V1(pg_freespace);
+ 
+ Datum
+ pg_freespace(PG_FUNCTION_ARGS)
+ {
+ 	Oid			relid = PG_GETARG_OID(0);
+ 	int64		blkno = PG_GETARG_INT64(1);
+ 	int16		freespace;
+ 	Relation	rel;
+ 
+ 	rel = relation_open(relid, AccessShareLock);
+ 
+ 	if (blkno < 0 || blkno > MaxBlockNumber)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 				 errmsg("invalid block number")));
+ 
+ 	freespace = GetRecordedFreeSpace(rel, blkno);
+ 
+ 	relation_close(rel, AccessShareLock);
+ 	PG_RETURN_INT16(freespace);
+ }
diff --git a/src/extension/pg_freespacemap/pg_freespacemap.control b/src/extension/pg_freespacemap/pg_freespacemap.control
index ...34b695f .
*** a/src/extension/pg_freespacemap/pg_freespacemap.control
--- b/src/extension/pg_freespacemap/pg_freespacemap.control
***************
*** 0 ****
--- 1,5 ----
+ # pg_freespacemap extension
+ comment = 'examine the free space map (FSM)'
+ default_version = '1.0'
+ module_pathname = '$libdir/pg_freespacemap'
+ relocatable = true
diff --git a/src/extension/pg_stat_statements/Makefile b/src/extension/pg_stat_statements/Makefile
index ...ffaebce .
*** a/src/extension/pg_stat_statements/Makefile
--- b/src/extension/pg_stat_statements/Makefile
***************
*** 0 ****
--- 1,18 ----
+ # src/extension/pg_stat_statements/Makefile
+ 
+ MODULE_big = pg_stat_statements
+ OBJS = pg_stat_statements.o
+ 
+ EXTENSION = pg_stat_statements
+ DATA = pg_stat_statements--1.0.sql pg_stat_statements--unpackaged--1.0.sql
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/extension/pg_stat_statements
+ top_builddir = ../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/src/extension/extension-global.mk
+ endif
diff --git a/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql b/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql
index ...c2a1206 .
*** a/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql
--- b/src/extension/pg_stat_statements/pg_stat_statements--1.0.sql
***************
*** 0 ****
--- 1,39 ----
+ /* src/extension/pg_stat_statements/pg_stat_statements--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pg_stat_statements" to load this file. \quit
+ 
+ -- Register functions.
+ CREATE FUNCTION pg_stat_statements_reset()
+ RETURNS void
+ AS 'MODULE_PATHNAME'
+ LANGUAGE C;
+ 
+ CREATE FUNCTION pg_stat_statements(
+     OUT userid oid,
+     OUT dbid oid,
+     OUT query text,
+     OUT calls int8,
+     OUT total_time float8,
+     OUT rows int8,
+     OUT shared_blks_hit int8,
+     OUT shared_blks_read int8,
+     OUT shared_blks_written int8,
+     OUT local_blks_hit int8,
+     OUT local_blks_read int8,
+     OUT local_blks_written int8,
+     OUT temp_blks_read int8,
+     OUT temp_blks_written int8
+ )
+ RETURNS SETOF record
+ AS 'MODULE_PATHNAME'
+ LANGUAGE C;
+ 
+ -- Register a view on the function for ease of use.
+ CREATE VIEW pg_stat_statements AS
+   SELECT * FROM pg_stat_statements();
+ 
+ GRANT SELECT ON pg_stat_statements TO PUBLIC;
+ 
+ -- Don't want this to be available to non-superusers.
+ REVOKE ALL ON FUNCTION pg_stat_statements_reset() FROM PUBLIC;
diff --git a/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql b/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
index ...26fe0d0 .
*** a/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
--- b/src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql
***************
*** 0 ****
--- 1,8 ----
+ /* src/extension/pg_stat_statements/pg_stat_statements--unpackaged--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pg_stat_statements" to load this file. \quit
+ 
+ ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements_reset();
+ ALTER EXTENSION pg_stat_statements ADD function pg_stat_statements();
+ ALTER EXTENSION pg_stat_statements ADD view pg_stat_statements;
diff --git a/src/extension/pg_stat_statements/pg_stat_statements.c b/src/extension/pg_stat_statements/pg_stat_statements.c
index ...8d16dd8 .
*** a/src/extension/pg_stat_statements/pg_stat_statements.c
--- b/src/extension/pg_stat_statements/pg_stat_statements.c
***************
*** 0 ****
--- 1,1042 ----
+ /*-------------------------------------------------------------------------
+  *
+  * pg_stat_statements.c
+  *		Track statement execution times across a whole database cluster.
+  *
+  * Note about locking issues: to create or delete an entry in the shared
+  * hashtable, one must hold pgss->lock exclusively.  Modifying any field
+  * in an entry except the counters requires the same.  To look up an entry,
+  * one must hold the lock shared.  To read or update the counters within
+  * an entry, one must hold the lock shared or exclusive (so the entry doesn't
+  * disappear!) and also take the entry's mutex spinlock.
+  *
+  *
+  * Copyright (c) 2008-2011, PostgreSQL Global Development Group
+  *
+  * IDENTIFICATION
+  *	  src/extension/pg_stat_statements/pg_stat_statements.c
+  *
+  *-------------------------------------------------------------------------
+  */
+ #include "postgres.h"
+ 
+ #include <unistd.h>
+ 
+ #include "access/hash.h"
+ #include "executor/instrument.h"
+ #include "funcapi.h"
+ #include "mb/pg_wchar.h"
+ #include "miscadmin.h"
+ #include "pgstat.h"
+ #include "storage/fd.h"
+ #include "storage/ipc.h"
+ #include "storage/spin.h"
+ #include "tcop/utility.h"
+ #include "utils/builtins.h"
+ 
+ 
+ PG_MODULE_MAGIC;
+ 
+ /* Location of stats file */
+ #define PGSS_DUMP_FILE	"global/pg_stat_statements.stat"
+ 
+ /* This constant defines the magic number in the stats file header */
+ static const uint32 PGSS_FILE_HEADER = 0x20100108;
+ 
+ /* XXX: Should USAGE_EXEC reflect execution time and/or buffer usage? */
+ #define USAGE_EXEC(duration)	(1.0)
+ #define USAGE_INIT				(1.0)	/* including initial planning */
+ #define USAGE_DECREASE_FACTOR	(0.99)	/* decreased every entry_dealloc */
+ #define USAGE_DEALLOC_PERCENT	5		/* free this % of entries at once */
+ 
+ /*
+  * Hashtable key that defines the identity of a hashtable entry.  The
+  * hash comparators do not assume that the query string is null-terminated;
+  * this lets us search for an mbcliplen'd string without copying it first.
+  *
+  * Presently, the query encoding is fully determined by the source database
+  * and so we don't really need it to be in the key.  But that might not always
+  * be true. Anyway it's notationally convenient to pass it as part of the key.
+  */
+ typedef struct pgssHashKey
+ {
+ 	Oid			userid;			/* user OID */
+ 	Oid			dbid;			/* database OID */
+ 	int			encoding;		/* query encoding */
+ 	int			query_len;		/* # of valid bytes in query string */
+ 	const char *query_ptr;		/* query string proper */
+ } pgssHashKey;
+ 
+ /*
+  * The actual stats counters kept within pgssEntry.
+  */
+ typedef struct Counters
+ {
+ 	int64		calls;			/* # of times executed */
+ 	double		total_time;		/* total execution time in seconds */
+ 	int64		rows;			/* total # of retrieved or affected rows */
+ 	int64		shared_blks_hit;	/* # of shared buffer hits */
+ 	int64		shared_blks_read;		/* # of shared disk blocks read */
+ 	int64		shared_blks_written;	/* # of shared disk blocks written */
+ 	int64		local_blks_hit; /* # of local buffer hits */
+ 	int64		local_blks_read;	/* # of local disk blocks read */
+ 	int64		local_blks_written;		/* # of local disk blocks written */
+ 	int64		temp_blks_read; /* # of temp blocks read */
+ 	int64		temp_blks_written;		/* # of temp blocks written */
+ 	double		usage;			/* usage factor */
+ } Counters;
+ 
+ /*
+  * Statistics per statement
+  *
+  * NB: see the file read/write code before changing field order here.
+  */
+ typedef struct pgssEntry
+ {
+ 	pgssHashKey key;			/* hash key of entry - MUST BE FIRST */
+ 	Counters	counters;		/* the statistics for this query */
+ 	slock_t		mutex;			/* protects the counters only */
+ 	char		query[1];		/* VARIABLE LENGTH ARRAY - MUST BE LAST */
+ 	/* Note: the allocated length of query[] is actually pgss->query_size */
+ } pgssEntry;
+ 
+ /*
+  * Global shared state
+  */
+ typedef struct pgssSharedState
+ {
+ 	LWLockId	lock;			/* protects hashtable search/modification */
+ 	int			query_size;		/* max query length in bytes */
+ } pgssSharedState;
+ 
+ /*---- Local variables ----*/
+ 
+ /* Current nesting depth of ExecutorRun calls */
+ static int	nested_level = 0;
+ 
+ /* Saved hook values in case of unload */
+ static shmem_startup_hook_type prev_shmem_startup_hook = NULL;
+ static ExecutorStart_hook_type prev_ExecutorStart = NULL;
+ static ExecutorRun_hook_type prev_ExecutorRun = NULL;
+ static ExecutorFinish_hook_type prev_ExecutorFinish = NULL;
+ static ExecutorEnd_hook_type prev_ExecutorEnd = NULL;
+ static ProcessUtility_hook_type prev_ProcessUtility = NULL;
+ 
+ /* Links to shared memory state */
+ static pgssSharedState *pgss = NULL;
+ static HTAB *pgss_hash = NULL;
+ 
+ /*---- GUC variables ----*/
+ 
+ typedef enum
+ {
+ 	PGSS_TRACK_NONE,			/* track no statements */
+ 	PGSS_TRACK_TOP,				/* only top level statements */
+ 	PGSS_TRACK_ALL				/* all statements, including nested ones */
+ }	PGSSTrackLevel;
+ 
+ static const struct config_enum_entry track_options[] =
+ {
+ 	{"none", PGSS_TRACK_NONE, false},
+ 	{"top", PGSS_TRACK_TOP, false},
+ 	{"all", PGSS_TRACK_ALL, false},
+ 	{NULL, 0, false}
+ };
+ 
+ static int	pgss_max;			/* max # statements to track */
+ static int	pgss_track;			/* tracking level */
+ static bool pgss_track_utility; /* whether to track utility commands */
+ static bool pgss_save;			/* whether to save stats across shutdown */
+ 
+ 
+ #define pgss_enabled() \
+ 	(pgss_track == PGSS_TRACK_ALL || \
+ 	(pgss_track == PGSS_TRACK_TOP && nested_level == 0))
+ 
+ /*---- Function declarations ----*/
+ 
+ void		_PG_init(void);
+ void		_PG_fini(void);
+ 
+ Datum		pg_stat_statements_reset(PG_FUNCTION_ARGS);
+ Datum		pg_stat_statements(PG_FUNCTION_ARGS);
+ 
+ PG_FUNCTION_INFO_V1(pg_stat_statements_reset);
+ PG_FUNCTION_INFO_V1(pg_stat_statements);
+ 
+ static void pgss_shmem_startup(void);
+ static void pgss_shmem_shutdown(int code, Datum arg);
+ static void pgss_ExecutorStart(QueryDesc *queryDesc, int eflags);
+ static void pgss_ExecutorRun(QueryDesc *queryDesc,
+ 				 ScanDirection direction,
+ 				 long count);
+ static void pgss_ExecutorFinish(QueryDesc *queryDesc);
+ static void pgss_ExecutorEnd(QueryDesc *queryDesc);
+ static void pgss_ProcessUtility(Node *parsetree,
+ 			  const char *queryString, ParamListInfo params, bool isTopLevel,
+ 					DestReceiver *dest, char *completionTag);
+ static uint32 pgss_hash_fn(const void *key, Size keysize);
+ static int	pgss_match_fn(const void *key1, const void *key2, Size keysize);
+ static void pgss_store(const char *query, double total_time, uint64 rows,
+ 		   const BufferUsage *bufusage);
+ static Size pgss_memsize(void);
+ static pgssEntry *entry_alloc(pgssHashKey *key);
+ static void entry_dealloc(void);
+ static void entry_reset(void);
+ 
+ 
+ /*
+  * Module load callback
+  */
+ void
+ _PG_init(void)
+ {
+ 	/*
+ 	 * In order to create our shared memory area, we have to be loaded via
+ 	 * shared_preload_libraries.  If not, fall out without hooking into any of
+ 	 * the main system.  (We don't throw error here because it seems useful to
+ 	 * allow the pg_stat_statements functions to be created even when the
+ 	 * module isn't active.  The functions must protect themselves against
+ 	 * being called then, however.)
+ 	 */
+ 	if (!process_shared_preload_libraries_in_progress)
+ 		return;
+ 
+ 	/*
+ 	 * Define (or redefine) custom GUC variables.
+ 	 */
+ 	DefineCustomIntVariable("pg_stat_statements.max",
+ 	  "Sets the maximum number of statements tracked by pg_stat_statements.",
+ 							NULL,
+ 							&pgss_max,
+ 							1000,
+ 							100,
+ 							INT_MAX,
+ 							PGC_POSTMASTER,
+ 							0,
+ 							NULL,
+ 							NULL,
+ 							NULL);
+ 
+ 	DefineCustomEnumVariable("pg_stat_statements.track",
+ 			   "Selects which statements are tracked by pg_stat_statements.",
+ 							 NULL,
+ 							 &pgss_track,
+ 							 PGSS_TRACK_TOP,
+ 							 track_options,
+ 							 PGC_SUSET,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	DefineCustomBoolVariable("pg_stat_statements.track_utility",
+ 	   "Selects whether utility commands are tracked by pg_stat_statements.",
+ 							 NULL,
+ 							 &pgss_track_utility,
+ 							 true,
+ 							 PGC_SUSET,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	DefineCustomBoolVariable("pg_stat_statements.save",
+ 			   "Save pg_stat_statements statistics across server shutdowns.",
+ 							 NULL,
+ 							 &pgss_save,
+ 							 true,
+ 							 PGC_SIGHUP,
+ 							 0,
+ 							 NULL,
+ 							 NULL,
+ 							 NULL);
+ 
+ 	EmitWarningsOnPlaceholders("pg_stat_statements");
+ 
+ 	/*
+ 	 * Request additional shared resources.  (These are no-ops if we're not in
+ 	 * the postmaster process.)  We'll allocate or attach to the shared
+ 	 * resources in pgss_shmem_startup().
+ 	 */
+ 	RequestAddinShmemSpace(pgss_memsize());
+ 	RequestAddinLWLocks(1);
+ 
+ 	/*
+ 	 * Install hooks.
+ 	 */
+ 	prev_shmem_startup_hook = shmem_startup_hook;
+ 	shmem_startup_hook = pgss_shmem_startup;
+ 	prev_ExecutorStart = ExecutorStart_hook;
+ 	ExecutorStart_hook = pgss_ExecutorStart;
+ 	prev_ExecutorRun = ExecutorRun_hook;
+ 	ExecutorRun_hook = pgss_ExecutorRun;
+ 	prev_ExecutorFinish = ExecutorFinish_hook;
+ 	ExecutorFinish_hook = pgss_ExecutorFinish;
+ 	prev_ExecutorEnd = ExecutorEnd_hook;
+ 	ExecutorEnd_hook = pgss_ExecutorEnd;
+ 	prev_ProcessUtility = ProcessUtility_hook;
+ 	ProcessUtility_hook = pgss_ProcessUtility;
+ }
+ 
+ /*
+  * Module unload callback
+  */
+ void
+ _PG_fini(void)
+ {
+ 	/* Uninstall hooks. */
+ 	shmem_startup_hook = prev_shmem_startup_hook;
+ 	ExecutorStart_hook = prev_ExecutorStart;
+ 	ExecutorRun_hook = prev_ExecutorRun;
+ 	ExecutorFinish_hook = prev_ExecutorFinish;
+ 	ExecutorEnd_hook = prev_ExecutorEnd;
+ 	ProcessUtility_hook = prev_ProcessUtility;
+ }
+ 
+ /*
+  * shmem_startup hook: allocate or attach to shared memory,
+  * then load any pre-existing statistics from file.
+  */
+ static void
+ pgss_shmem_startup(void)
+ {
+ 	bool		found;
+ 	HASHCTL		info;
+ 	FILE	   *file;
+ 	uint32		header;
+ 	int32		num;
+ 	int32		i;
+ 	int			query_size;
+ 	int			buffer_size;
+ 	char	   *buffer = NULL;
+ 
+ 	if (prev_shmem_startup_hook)
+ 		prev_shmem_startup_hook();
+ 
+ 	/* reset in case this is a restart within the postmaster */
+ 	pgss = NULL;
+ 	pgss_hash = NULL;
+ 
+ 	/*
+ 	 * Create or attach to the shared memory state, including hash table
+ 	 */
+ 	LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
+ 
+ 	pgss = ShmemInitStruct("pg_stat_statements",
+ 						   sizeof(pgssSharedState),
+ 						   &found);
+ 
+ 	if (!found)
+ 	{
+ 		/* First time through ... */
+ 		pgss->lock = LWLockAssign();
+ 		pgss->query_size = pgstat_track_activity_query_size;
+ 	}
+ 
+ 	/* Be sure everyone agrees on the hash table entry size */
+ 	query_size = pgss->query_size;
+ 
+ 	memset(&info, 0, sizeof(info));
+ 	info.keysize = sizeof(pgssHashKey);
+ 	info.entrysize = offsetof(pgssEntry, query) +query_size;
+ 	info.hash = pgss_hash_fn;
+ 	info.match = pgss_match_fn;
+ 	pgss_hash = ShmemInitHash("pg_stat_statements hash",
+ 							  pgss_max, pgss_max,
+ 							  &info,
+ 							  HASH_ELEM | HASH_FUNCTION | HASH_COMPARE);
+ 
+ 	LWLockRelease(AddinShmemInitLock);
+ 
+ 	/*
+ 	 * If we're in the postmaster (or a standalone backend...), set up a shmem
+ 	 * exit hook to dump the statistics to disk.
+ 	 */
+ 	if (!IsUnderPostmaster)
+ 		on_shmem_exit(pgss_shmem_shutdown, (Datum) 0);
+ 
+ 	/*
+ 	 * Attempt to load old statistics from the dump file, if this is the first
+ 	 * time through and we weren't told not to.
+ 	 */
+ 	if (found || !pgss_save)
+ 		return;
+ 
+ 	/*
+ 	 * Note: we don't bother with locks here, because there should be no other
+ 	 * processes running when this code is reached.
+ 	 */
+ 	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_R);
+ 	if (file == NULL)
+ 	{
+ 		if (errno == ENOENT)
+ 			return;				/* ignore not-found error */
+ 		goto error;
+ 	}
+ 
+ 	buffer_size = query_size;
+ 	buffer = (char *) palloc(buffer_size);
+ 
+ 	if (fread(&header, sizeof(uint32), 1, file) != 1 ||
+ 		header != PGSS_FILE_HEADER ||
+ 		fread(&num, sizeof(int32), 1, file) != 1)
+ 		goto error;
+ 
+ 	for (i = 0; i < num; i++)
+ 	{
+ 		pgssEntry	temp;
+ 		pgssEntry  *entry;
+ 
+ 		if (fread(&temp, offsetof(pgssEntry, mutex), 1, file) != 1)
+ 			goto error;
+ 
+ 		/* Encoding is the only field we can easily sanity-check */
+ 		if (!PG_VALID_BE_ENCODING(temp.key.encoding))
+ 			goto error;
+ 
+ 		/* Previous incarnation might have had a larger query_size */
+ 		if (temp.key.query_len >= buffer_size)
+ 		{
+ 			buffer = (char *) repalloc(buffer, temp.key.query_len + 1);
+ 			buffer_size = temp.key.query_len + 1;
+ 		}
+ 
+ 		if (fread(buffer, 1, temp.key.query_len, file) != temp.key.query_len)
+ 			goto error;
+ 		buffer[temp.key.query_len] = '\0';
+ 
+ 		/* Clip to available length if needed */
+ 		if (temp.key.query_len >= query_size)
+ 			temp.key.query_len = pg_encoding_mbcliplen(temp.key.encoding,
+ 													   buffer,
+ 													   temp.key.query_len,
+ 													   query_size - 1);
+ 		temp.key.query_ptr = buffer;
+ 
+ 		/* make the hashtable entry (discards old entries if too many) */
+ 		entry = entry_alloc(&temp.key);
+ 
+ 		/* copy in the actual stats */
+ 		entry->counters = temp.counters;
+ 	}
+ 
+ 	pfree(buffer);
+ 	FreeFile(file);
+ 	return;
+ 
+ error:
+ 	ereport(LOG,
+ 			(errcode_for_file_access(),
+ 			 errmsg("could not read pg_stat_statement file \"%s\": %m",
+ 					PGSS_DUMP_FILE)));
+ 	if (buffer)
+ 		pfree(buffer);
+ 	if (file)
+ 		FreeFile(file);
+ 	/* If possible, throw away the bogus file; ignore any error */
+ 	unlink(PGSS_DUMP_FILE);
+ }
+ 
+ /*
+  * shmem_shutdown hook: Dump statistics into file.
+  *
+  * Note: we don't bother with acquiring lock, because there should be no
+  * other processes running when this is called.
+  */
+ static void
+ pgss_shmem_shutdown(int code, Datum arg)
+ {
+ 	FILE	   *file;
+ 	HASH_SEQ_STATUS hash_seq;
+ 	int32		num_entries;
+ 	pgssEntry  *entry;
+ 
+ 	/* Don't try to dump during a crash. */
+ 	if (code)
+ 		return;
+ 
+ 	/* Safety check ... shouldn't get here unless shmem is set up. */
+ 	if (!pgss || !pgss_hash)
+ 		return;
+ 
+ 	/* Don't dump if told not to. */
+ 	if (!pgss_save)
+ 		return;
+ 
+ 	file = AllocateFile(PGSS_DUMP_FILE, PG_BINARY_W);
+ 	if (file == NULL)
+ 		goto error;
+ 
+ 	if (fwrite(&PGSS_FILE_HEADER, sizeof(uint32), 1, file) != 1)
+ 		goto error;
+ 	num_entries = hash_get_num_entries(pgss_hash);
+ 	if (fwrite(&num_entries, sizeof(int32), 1, file) != 1)
+ 		goto error;
+ 
+ 	hash_seq_init(&hash_seq, pgss_hash);
+ 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+ 	{
+ 		int			len = entry->key.query_len;
+ 
+ 		if (fwrite(entry, offsetof(pgssEntry, mutex), 1, file) != 1 ||
+ 			fwrite(entry->query, 1, len, file) != len)
+ 			goto error;
+ 	}
+ 
+ 	if (FreeFile(file))
+ 	{
+ 		file = NULL;
+ 		goto error;
+ 	}
+ 
+ 	return;
+ 
+ error:
+ 	ereport(LOG,
+ 			(errcode_for_file_access(),
+ 			 errmsg("could not write pg_stat_statement file \"%s\": %m",
+ 					PGSS_DUMP_FILE)));
+ 	if (file)
+ 		FreeFile(file);
+ 	unlink(PGSS_DUMP_FILE);
+ }
+ 
+ /*
+  * ExecutorStart hook: start up tracking if needed
+  */
+ static void
+ pgss_ExecutorStart(QueryDesc *queryDesc, int eflags)
+ {
+ 	if (prev_ExecutorStart)
+ 		prev_ExecutorStart(queryDesc, eflags);
+ 	else
+ 		standard_ExecutorStart(queryDesc, eflags);
+ 
+ 	if (pgss_enabled())
+ 	{
+ 		/*
+ 		 * Set up to track total elapsed time in ExecutorRun.  Make sure the
+ 		 * space is allocated in the per-query context so it will go away at
+ 		 * ExecutorEnd.
+ 		 */
+ 		if (queryDesc->totaltime == NULL)
+ 		{
+ 			MemoryContext oldcxt;
+ 
+ 			oldcxt = MemoryContextSwitchTo(queryDesc->estate->es_query_cxt);
+ 			queryDesc->totaltime = InstrAlloc(1, INSTRUMENT_ALL);
+ 			MemoryContextSwitchTo(oldcxt);
+ 		}
+ 	}
+ }
+ 
+ /*
+  * ExecutorRun hook: all we need do is track nesting depth
+  */
+ static void
+ pgss_ExecutorRun(QueryDesc *queryDesc, ScanDirection direction, long count)
+ {
+ 	nested_level++;
+ 	PG_TRY();
+ 	{
+ 		if (prev_ExecutorRun)
+ 			prev_ExecutorRun(queryDesc, direction, count);
+ 		else
+ 			standard_ExecutorRun(queryDesc, direction, count);
+ 		nested_level--;
+ 	}
+ 	PG_CATCH();
+ 	{
+ 		nested_level--;
+ 		PG_RE_THROW();
+ 	}
+ 	PG_END_TRY();
+ }
+ 
+ /*
+  * ExecutorFinish hook: all we need do is track nesting depth
+  */
+ static void
+ pgss_ExecutorFinish(QueryDesc *queryDesc)
+ {
+ 	nested_level++;
+ 	PG_TRY();
+ 	{
+ 		if (prev_ExecutorFinish)
+ 			prev_ExecutorFinish(queryDesc);
+ 		else
+ 			standard_ExecutorFinish(queryDesc);
+ 		nested_level--;
+ 	}
+ 	PG_CATCH();
+ 	{
+ 		nested_level--;
+ 		PG_RE_THROW();
+ 	}
+ 	PG_END_TRY();
+ }
+ 
+ /*
+  * ExecutorEnd hook: store results if needed
+  */
+ static void
+ pgss_ExecutorEnd(QueryDesc *queryDesc)
+ {
+ 	if (queryDesc->totaltime && pgss_enabled())
+ 	{
+ 		/*
+ 		 * Make sure stats accumulation is done.  (Note: it's okay if several
+ 		 * levels of hook all do this.)
+ 		 */
+ 		InstrEndLoop(queryDesc->totaltime);
+ 
+ 		pgss_store(queryDesc->sourceText,
+ 				   queryDesc->totaltime->total,
+ 				   queryDesc->estate->es_processed,
+ 				   &queryDesc->totaltime->bufusage);
+ 	}
+ 
+ 	if (prev_ExecutorEnd)
+ 		prev_ExecutorEnd(queryDesc);
+ 	else
+ 		standard_ExecutorEnd(queryDesc);
+ }
+ 
+ /*
+  * ProcessUtility hook
+  */
+ static void
+ pgss_ProcessUtility(Node *parsetree, const char *queryString,
+ 					ParamListInfo params, bool isTopLevel,
+ 					DestReceiver *dest, char *completionTag)
+ {
+ 	if (pgss_track_utility && pgss_enabled())
+ 	{
+ 		instr_time	start;
+ 		instr_time	duration;
+ 		uint64		rows = 0;
+ 		BufferUsage bufusage;
+ 
+ 		bufusage = pgBufferUsage;
+ 		INSTR_TIME_SET_CURRENT(start);
+ 
+ 		nested_level++;
+ 		PG_TRY();
+ 		{
+ 			if (prev_ProcessUtility)
+ 				prev_ProcessUtility(parsetree, queryString, params,
+ 									isTopLevel, dest, completionTag);
+ 			else
+ 				standard_ProcessUtility(parsetree, queryString, params,
+ 										isTopLevel, dest, completionTag);
+ 			nested_level--;
+ 		}
+ 		PG_CATCH();
+ 		{
+ 			nested_level--;
+ 			PG_RE_THROW();
+ 		}
+ 		PG_END_TRY();
+ 
+ 		INSTR_TIME_SET_CURRENT(duration);
+ 		INSTR_TIME_SUBTRACT(duration, start);
+ 
+ 		/* parse command tag to retrieve the number of affected rows. */
+ 		if (completionTag &&
+ 			sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1)
+ 			rows = 0;
+ 
+ 		/* calc differences of buffer counters. */
+ 		bufusage.shared_blks_hit =
+ 			pgBufferUsage.shared_blks_hit - bufusage.shared_blks_hit;
+ 		bufusage.shared_blks_read =
+ 			pgBufferUsage.shared_blks_read - bufusage.shared_blks_read;
+ 		bufusage.shared_blks_written =
+ 			pgBufferUsage.shared_blks_written - bufusage.shared_blks_written;
+ 		bufusage.local_blks_hit =
+ 			pgBufferUsage.local_blks_hit - bufusage.local_blks_hit;
+ 		bufusage.local_blks_read =
+ 			pgBufferUsage.local_blks_read - bufusage.local_blks_read;
+ 		bufusage.local_blks_written =
+ 			pgBufferUsage.local_blks_written - bufusage.local_blks_written;
+ 		bufusage.temp_blks_read =
+ 			pgBufferUsage.temp_blks_read - bufusage.temp_blks_read;
+ 		bufusage.temp_blks_written =
+ 			pgBufferUsage.temp_blks_written - bufusage.temp_blks_written;
+ 
+ 		pgss_store(queryString, INSTR_TIME_GET_DOUBLE(duration), rows,
+ 				   &bufusage);
+ 	}
+ 	else
+ 	{
+ 		if (prev_ProcessUtility)
+ 			prev_ProcessUtility(parsetree, queryString, params,
+ 								isTopLevel, dest, completionTag);
+ 		else
+ 			standard_ProcessUtility(parsetree, queryString, params,
+ 									isTopLevel, dest, completionTag);
+ 	}
+ }
+ 
+ /*
+  * Calculate hash value for a key
+  */
+ static uint32
+ pgss_hash_fn(const void *key, Size keysize)
+ {
+ 	const pgssHashKey *k = (const pgssHashKey *) key;
+ 
+ 	/* we don't bother to include encoding in the hash */
+ 	return hash_uint32((uint32) k->userid) ^
+ 		hash_uint32((uint32) k->dbid) ^
+ 		DatumGetUInt32(hash_any((const unsigned char *) k->query_ptr,
+ 								k->query_len));
+ }
+ 
+ /*
+  * Compare two keys - zero means match
+  */
+ static int
+ pgss_match_fn(const void *key1, const void *key2, Size keysize)
+ {
+ 	const pgssHashKey *k1 = (const pgssHashKey *) key1;
+ 	const pgssHashKey *k2 = (const pgssHashKey *) key2;
+ 
+ 	if (k1->userid == k2->userid &&
+ 		k1->dbid == k2->dbid &&
+ 		k1->encoding == k2->encoding &&
+ 		k1->query_len == k2->query_len &&
+ 		memcmp(k1->query_ptr, k2->query_ptr, k1->query_len) == 0)
+ 		return 0;
+ 	else
+ 		return 1;
+ }
+ 
+ /*
+  * Store some statistics for a statement.
+  */
+ static void
+ pgss_store(const char *query, double total_time, uint64 rows,
+ 		   const BufferUsage *bufusage)
+ {
+ 	pgssHashKey key;
+ 	double		usage;
+ 	pgssEntry  *entry;
+ 
+ 	Assert(query != NULL);
+ 
+ 	/* Safety check... */
+ 	if (!pgss || !pgss_hash)
+ 		return;
+ 
+ 	/* Set up key for hashtable search */
+ 	key.userid = GetUserId();
+ 	key.dbid = MyDatabaseId;
+ 	key.encoding = GetDatabaseEncoding();
+ 	key.query_len = strlen(query);
+ 	if (key.query_len >= pgss->query_size)
+ 		key.query_len = pg_encoding_mbcliplen(key.encoding,
+ 											  query,
+ 											  key.query_len,
+ 											  pgss->query_size - 1);
+ 	key.query_ptr = query;
+ 
+ 	usage = USAGE_EXEC(duration);
+ 
+ 	/* Lookup the hash table entry with shared lock. */
+ 	LWLockAcquire(pgss->lock, LW_SHARED);
+ 
+ 	entry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);
+ 	if (!entry)
+ 	{
+ 		/* Must acquire exclusive lock to add a new entry. */
+ 		LWLockRelease(pgss->lock);
+ 		LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
+ 		entry = entry_alloc(&key);
+ 	}
+ 
+ 	/* Grab the spinlock while updating the counters. */
+ 	{
+ 		volatile pgssEntry *e = (volatile pgssEntry *) entry;
+ 
+ 		SpinLockAcquire(&e->mutex);
+ 		e->counters.calls += 1;
+ 		e->counters.total_time += total_time;
+ 		e->counters.rows += rows;
+ 		e->counters.shared_blks_hit += bufusage->shared_blks_hit;
+ 		e->counters.shared_blks_read += bufusage->shared_blks_read;
+ 		e->counters.shared_blks_written += bufusage->shared_blks_written;
+ 		e->counters.local_blks_hit += bufusage->local_blks_hit;
+ 		e->counters.local_blks_read += bufusage->local_blks_read;
+ 		e->counters.local_blks_written += bufusage->local_blks_written;
+ 		e->counters.temp_blks_read += bufusage->temp_blks_read;
+ 		e->counters.temp_blks_written += bufusage->temp_blks_written;
+ 		e->counters.usage += usage;
+ 		SpinLockRelease(&e->mutex);
+ 	}
+ 
+ 	LWLockRelease(pgss->lock);
+ }
+ 
+ /*
+  * Reset all statement statistics.
+  */
+ Datum
+ pg_stat_statements_reset(PG_FUNCTION_ARGS)
+ {
+ 	if (!pgss || !pgss_hash)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ 				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
+ 	entry_reset();
+ 	PG_RETURN_VOID();
+ }
+ 
+ #define PG_STAT_STATEMENTS_COLS		14
+ 
+ /*
+  * Retrieve statement statistics.
+  */
+ Datum
+ pg_stat_statements(PG_FUNCTION_ARGS)
+ {
+ 	ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
+ 	TupleDesc	tupdesc;
+ 	Tuplestorestate *tupstore;
+ 	MemoryContext per_query_ctx;
+ 	MemoryContext oldcontext;
+ 	Oid			userid = GetUserId();
+ 	bool		is_superuser = superuser();
+ 	HASH_SEQ_STATUS hash_seq;
+ 	pgssEntry  *entry;
+ 
+ 	if (!pgss || !pgss_hash)
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ 				 errmsg("pg_stat_statements must be loaded via shared_preload_libraries")));
+ 
+ 	/* check to see if caller supports us returning a tuplestore */
+ 	if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("set-valued function called in context that cannot accept a set")));
+ 	if (!(rsinfo->allowedModes & SFRM_Materialize))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("materialize mode required, but it is not " \
+ 						"allowed in this context")));
+ 
+ 	/* Build a tuple descriptor for our result type */
+ 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ 		elog(ERROR, "return type must be a row type");
+ 
+ 	per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
+ 	oldcontext = MemoryContextSwitchTo(per_query_ctx);
+ 
+ 	tupstore = tuplestore_begin_heap(true, false, work_mem);
+ 	rsinfo->returnMode = SFRM_Materialize;
+ 	rsinfo->setResult = tupstore;
+ 	rsinfo->setDesc = tupdesc;
+ 
+ 	MemoryContextSwitchTo(oldcontext);
+ 
+ 	LWLockAcquire(pgss->lock, LW_SHARED);
+ 
+ 	hash_seq_init(&hash_seq, pgss_hash);
+ 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+ 	{
+ 		Datum		values[PG_STAT_STATEMENTS_COLS];
+ 		bool		nulls[PG_STAT_STATEMENTS_COLS];
+ 		int			i = 0;
+ 		Counters	tmp;
+ 
+ 		memset(values, 0, sizeof(values));
+ 		memset(nulls, 0, sizeof(nulls));
+ 
+ 		values[i++] = ObjectIdGetDatum(entry->key.userid);
+ 		values[i++] = ObjectIdGetDatum(entry->key.dbid);
+ 
+ 		if (is_superuser || entry->key.userid == userid)
+ 		{
+ 			char	   *qstr;
+ 
+ 			qstr = (char *)
+ 				pg_do_encoding_conversion((unsigned char *) entry->query,
+ 										  entry->key.query_len,
+ 										  entry->key.encoding,
+ 										  GetDatabaseEncoding());
+ 			values[i++] = CStringGetTextDatum(qstr);
+ 			if (qstr != entry->query)
+ 				pfree(qstr);
+ 		}
+ 		else
+ 			values[i++] = CStringGetTextDatum("<insufficient privilege>");
+ 
+ 		/* copy counters to a local variable to keep locking time short */
+ 		{
+ 			volatile pgssEntry *e = (volatile pgssEntry *) entry;
+ 
+ 			SpinLockAcquire(&e->mutex);
+ 			tmp = e->counters;
+ 			SpinLockRelease(&e->mutex);
+ 		}
+ 
+ 		values[i++] = Int64GetDatumFast(tmp.calls);
+ 		values[i++] = Float8GetDatumFast(tmp.total_time);
+ 		values[i++] = Int64GetDatumFast(tmp.rows);
+ 		values[i++] = Int64GetDatumFast(tmp.shared_blks_hit);
+ 		values[i++] = Int64GetDatumFast(tmp.shared_blks_read);
+ 		values[i++] = Int64GetDatumFast(tmp.shared_blks_written);
+ 		values[i++] = Int64GetDatumFast(tmp.local_blks_hit);
+ 		values[i++] = Int64GetDatumFast(tmp.local_blks_read);
+ 		values[i++] = Int64GetDatumFast(tmp.local_blks_written);
+ 		values[i++] = Int64GetDatumFast(tmp.temp_blks_read);
+ 		values[i++] = Int64GetDatumFast(tmp.temp_blks_written);
+ 
+ 		Assert(i == PG_STAT_STATEMENTS_COLS);
+ 
+ 		tuplestore_putvalues(tupstore, tupdesc, values, nulls);
+ 	}
+ 
+ 	LWLockRelease(pgss->lock);
+ 
+ 	/* clean up and return the tuplestore */
+ 	tuplestore_donestoring(tupstore);
+ 
+ 	return (Datum) 0;
+ }
+ 
+ /*
+  * Estimate shared memory space needed.
+  */
+ static Size
+ pgss_memsize(void)
+ {
+ 	Size		size;
+ 	Size		entrysize;
+ 
+ 	size = MAXALIGN(sizeof(pgssSharedState));
+ 	entrysize = offsetof(pgssEntry, query) +pgstat_track_activity_query_size;
+ 	size = add_size(size, hash_estimate_size(pgss_max, entrysize));
+ 
+ 	return size;
+ }
+ 
+ /*
+  * Allocate a new hashtable entry.
+  * caller must hold an exclusive lock on pgss->lock
+  *
+  * Note: despite needing exclusive lock, it's not an error for the target
+  * entry to already exist.	This is because pgss_store releases and
+  * reacquires lock after failing to find a match; so someone else could
+  * have made the entry while we waited to get exclusive lock.
+  */
+ static pgssEntry *
+ entry_alloc(pgssHashKey *key)
+ {
+ 	pgssEntry  *entry;
+ 	bool		found;
+ 
+ 	/* Caller must have clipped query properly */
+ 	Assert(key->query_len < pgss->query_size);
+ 
+ 	/* Make space if needed */
+ 	while (hash_get_num_entries(pgss_hash) >= pgss_max)
+ 		entry_dealloc();
+ 
+ 	/* Find or create an entry with desired hash code */
+ 	entry = (pgssEntry *) hash_search(pgss_hash, key, HASH_ENTER, &found);
+ 
+ 	if (!found)
+ 	{
+ 		/* New entry, initialize it */
+ 
+ 		/* dynahash tried to copy the key for us, but must fix query_ptr */
+ 		entry->key.query_ptr = entry->query;
+ 		/* reset the statistics */
+ 		memset(&entry->counters, 0, sizeof(Counters));
+ 		entry->counters.usage = USAGE_INIT;
+ 		/* re-initialize the mutex each time ... we assume no one using it */
+ 		SpinLockInit(&entry->mutex);
+ 		/* ... and don't forget the query text */
+ 		memcpy(entry->query, key->query_ptr, key->query_len);
+ 		entry->query[key->query_len] = '\0';
+ 	}
+ 
+ 	return entry;
+ }
+ 
+ /*
+  * qsort comparator for sorting into increasing usage order
+  */
+ static int
+ entry_cmp(const void *lhs, const void *rhs)
+ {
+ 	double		l_usage = (*(pgssEntry * const *) lhs)->counters.usage;
+ 	double		r_usage = (*(pgssEntry * const *) rhs)->counters.usage;
+ 
+ 	if (l_usage < r_usage)
+ 		return -1;
+ 	else if (l_usage > r_usage)
+ 		return +1;
+ 	else
+ 		return 0;
+ }
+ 
+ /*
+  * Deallocate least used entries.
+  * Caller must hold an exclusive lock on pgss->lock.
+  */
+ static void
+ entry_dealloc(void)
+ {
+ 	HASH_SEQ_STATUS hash_seq;
+ 	pgssEntry **entries;
+ 	pgssEntry  *entry;
+ 	int			nvictims;
+ 	int			i;
+ 
+ 	/* Sort entries by usage and deallocate USAGE_DEALLOC_PERCENT of them. */
+ 
+ 	entries = palloc(hash_get_num_entries(pgss_hash) * sizeof(pgssEntry *));
+ 
+ 	i = 0;
+ 	hash_seq_init(&hash_seq, pgss_hash);
+ 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+ 	{
+ 		entries[i++] = entry;
+ 		entry->counters.usage *= USAGE_DECREASE_FACTOR;
+ 	}
+ 
+ 	qsort(entries, i, sizeof(pgssEntry *), entry_cmp);
+ 	nvictims = Max(10, i * USAGE_DEALLOC_PERCENT / 100);
+ 	nvictims = Min(nvictims, i);
+ 
+ 	for (i = 0; i < nvictims; i++)
+ 	{
+ 		hash_search(pgss_hash, &entries[i]->key, HASH_REMOVE, NULL);
+ 	}
+ 
+ 	pfree(entries);
+ }
+ 
+ /*
+  * Release all entries.
+  */
+ static void
+ entry_reset(void)
+ {
+ 	HASH_SEQ_STATUS hash_seq;
+ 	pgssEntry  *entry;
+ 
+ 	LWLockAcquire(pgss->lock, LW_EXCLUSIVE);
+ 
+ 	hash_seq_init(&hash_seq, pgss_hash);
+ 	while ((entry = hash_seq_search(&hash_seq)) != NULL)
+ 	{
+ 		hash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);
+ 	}
+ 
+ 	LWLockRelease(pgss->lock);
+ }
diff --git a/src/extension/pg_stat_statements/pg_stat_statements.control b/src/extension/pg_stat_statements/pg_stat_statements.control
index ...6f9a947 .
*** a/src/extension/pg_stat_statements/pg_stat_statements.control
--- b/src/extension/pg_stat_statements/pg_stat_statements.control
***************
*** 0 ****
--- 1,5 ----
+ # pg_stat_statements extension
+ comment = 'track execution statistics of all SQL statements executed'
+ default_version = '1.0'
+ module_pathname = '$libdir/pg_stat_statements'
+ relocatable = true
diff --git a/src/extension/pgrowlocks/Makefile b/src/extension/pgrowlocks/Makefile
index ...cc65f2d .
*** a/src/extension/pgrowlocks/Makefile
--- b/src/extension/pgrowlocks/Makefile
***************
*** 0 ****
--- 1,18 ----
+ # src/extension/pgrowlocks/Makefile
+ 
+ MODULE_big	= pgrowlocks
+ OBJS		= pgrowlocks.o
+ 
+ EXTENSION = pgrowlocks
+ DATA = pgrowlocks--1.0.sql pgrowlocks--unpackaged--1.0.sql
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/extension/pgrowlocks
+ top_builddir = ../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/src/extension/extension-global.mk
+ endif
diff --git a/src/extension/pgrowlocks/pgrowlocks--1.0.sql b/src/extension/pgrowlocks/pgrowlocks--1.0.sql
index ...59653c4 .
*** a/src/extension/pgrowlocks/pgrowlocks--1.0.sql
--- b/src/extension/pgrowlocks/pgrowlocks--1.0.sql
***************
*** 0 ****
--- 1,15 ----
+ /* src/extension/pgrowlocks/pgrowlocks--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pgrowlocks" to load this file. \quit
+ 
+ CREATE FUNCTION pgrowlocks(IN relname text,
+     OUT locked_row TID,		-- row TID
+     OUT lock_type TEXT,		-- lock type
+     OUT locker XID,		-- locking XID
+     OUT multi bool,		-- multi XID?
+     OUT xids xid[],		-- multi XIDs
+     OUT pids INTEGER[])		-- locker's process id
+ RETURNS SETOF record
+ AS 'MODULE_PATHNAME', 'pgrowlocks'
+ LANGUAGE C STRICT;
diff --git a/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql b/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
index ...f91658e .
*** a/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
--- b/src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql
***************
*** 0 ****
--- 1,6 ----
+ /* src/extension/pgrowlocks/pgrowlocks--unpackaged--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pgrowlocks" to load this file. \quit
+ 
+ ALTER EXTENSION pgrowlocks ADD function pgrowlocks(text);
diff --git a/src/extension/pgrowlocks/pgrowlocks.c b/src/extension/pgrowlocks/pgrowlocks.c
index ...68111e9 .
*** a/src/extension/pgrowlocks/pgrowlocks.c
--- b/src/extension/pgrowlocks/pgrowlocks.c
***************
*** 0 ****
--- 1,220 ----
+ /*
+  * src/extension/pgrowlocks/pgrowlocks.c
+  *
+  * Copyright (c) 2005-2006	Tatsuo Ishii
+  *
+  * Permission to use, copy, modify, and distribute this software and
+  * its documentation for any purpose, without fee, and without a
+  * written agreement is hereby granted, provided that the above
+  * copyright notice and this paragraph and the following two
+  * paragraphs appear in all copies.
+  *
+  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+  * OF THE POSSIBILITY OF SUCH DAMAGE.
+  *
+  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "access/multixact.h"
+ #include "access/relscan.h"
+ #include "access/xact.h"
+ #include "catalog/namespace.h"
+ #include "funcapi.h"
+ #include "miscadmin.h"
+ #include "storage/bufmgr.h"
+ #include "storage/procarray.h"
+ #include "utils/acl.h"
+ #include "utils/builtins.h"
+ #include "utils/rel.h"
+ #include "utils/tqual.h"
+ 
+ 
+ PG_MODULE_MAGIC;
+ 
+ PG_FUNCTION_INFO_V1(pgrowlocks);
+ 
+ extern Datum pgrowlocks(PG_FUNCTION_ARGS);
+ 
+ /* ----------
+  * pgrowlocks:
+  * returns tids of rows being locked
+  * ----------
+  */
+ 
+ #define NCHARS 32
+ 
+ typedef struct
+ {
+ 	Relation	rel;
+ 	HeapScanDesc scan;
+ 	int			ncolumns;
+ } MyData;
+ 
+ Datum
+ pgrowlocks(PG_FUNCTION_ARGS)
+ {
+ 	FuncCallContext *funcctx;
+ 	HeapScanDesc scan;
+ 	HeapTuple	tuple;
+ 	TupleDesc	tupdesc;
+ 	AttInMetadata *attinmeta;
+ 	Datum		result;
+ 	MyData	   *mydata;
+ 	Relation	rel;
+ 
+ 	if (SRF_IS_FIRSTCALL())
+ 	{
+ 		text	   *relname;
+ 		RangeVar   *relrv;
+ 		MemoryContext oldcontext;
+ 		AclResult	aclresult;
+ 
+ 		funcctx = SRF_FIRSTCALL_INIT();
+ 		oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+ 
+ 		/* Build a tuple descriptor for our result type */
+ 		if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ 			elog(ERROR, "return type must be a row type");
+ 
+ 		attinmeta = TupleDescGetAttInMetadata(tupdesc);
+ 		funcctx->attinmeta = attinmeta;
+ 
+ 		relname = PG_GETARG_TEXT_P(0);
+ 		relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 		rel = heap_openrv(relrv, AccessShareLock);
+ 
+ 		/* check permissions: must have SELECT on table */
+ 		aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),
+ 									  ACL_SELECT);
+ 		if (aclresult != ACLCHECK_OK)
+ 			aclcheck_error(aclresult, ACL_KIND_CLASS,
+ 						   RelationGetRelationName(rel));
+ 
+ 		scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
+ 		mydata = palloc(sizeof(*mydata));
+ 		mydata->rel = rel;
+ 		mydata->scan = scan;
+ 		mydata->ncolumns = tupdesc->natts;
+ 		funcctx->user_fctx = mydata;
+ 
+ 		MemoryContextSwitchTo(oldcontext);
+ 	}
+ 
+ 	funcctx = SRF_PERCALL_SETUP();
+ 	attinmeta = funcctx->attinmeta;
+ 	mydata = (MyData *) funcctx->user_fctx;
+ 	scan = mydata->scan;
+ 
+ 	/* scan the relation */
+ 	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ 	{
+ 		/* must hold a buffer lock to call HeapTupleSatisfiesUpdate */
+ 		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+ 
+ 		if (HeapTupleSatisfiesUpdate(tuple->t_data,
+ 									 GetCurrentCommandId(false),
+ 									 scan->rs_cbuf) == HeapTupleBeingUpdated)
+ 		{
+ 
+ 			char	  **values;
+ 			int			i;
+ 
+ 			values = (char **) palloc(mydata->ncolumns * sizeof(char *));
+ 
+ 			i = 0;
+ 			values[i++] = (char *) DirectFunctionCall1(tidout, PointerGetDatum(&tuple->t_self));
+ 
+ 			if (tuple->t_data->t_infomask & HEAP_XMAX_SHARED_LOCK)
+ 				values[i++] = pstrdup("Shared");
+ 			else
+ 				values[i++] = pstrdup("Exclusive");
+ 			values[i] = palloc(NCHARS * sizeof(char));
+ 			snprintf(values[i++], NCHARS, "%d", HeapTupleHeaderGetXmax(tuple->t_data));
+ 			if (tuple->t_data->t_infomask & HEAP_XMAX_IS_MULTI)
+ 			{
+ 				TransactionId *xids;
+ 				int			nxids;
+ 				int			j;
+ 				int			isValidXid = 0;		/* any valid xid ever exists? */
+ 
+ 				values[i++] = pstrdup("true");
+ 				nxids = GetMultiXactIdMembers(HeapTupleHeaderGetXmax(tuple->t_data), &xids);
+ 				if (nxids == -1)
+ 				{
+ 					elog(ERROR, "GetMultiXactIdMembers returns error");
+ 				}
+ 
+ 				values[i] = palloc(NCHARS * nxids);
+ 				values[i + 1] = palloc(NCHARS * nxids);
+ 				strcpy(values[i], "{");
+ 				strcpy(values[i + 1], "{");
+ 
+ 				for (j = 0; j < nxids; j++)
+ 				{
+ 					char		buf[NCHARS];
+ 
+ 					if (TransactionIdIsInProgress(xids[j]))
+ 					{
+ 						if (isValidXid)
+ 						{
+ 							strcat(values[i], ",");
+ 							strcat(values[i + 1], ",");
+ 						}
+ 						snprintf(buf, NCHARS, "%d", xids[j]);
+ 						strcat(values[i], buf);
+ 						snprintf(buf, NCHARS, "%d", BackendXidGetPid(xids[j]));
+ 						strcat(values[i + 1], buf);
+ 
+ 						isValidXid = 1;
+ 					}
+ 				}
+ 
+ 				strcat(values[i], "}");
+ 				strcat(values[i + 1], "}");
+ 				i++;
+ 			}
+ 			else
+ 			{
+ 				values[i++] = pstrdup("false");
+ 				values[i] = palloc(NCHARS * sizeof(char));
+ 				snprintf(values[i++], NCHARS, "{%d}", HeapTupleHeaderGetXmax(tuple->t_data));
+ 
+ 				values[i] = palloc(NCHARS * sizeof(char));
+ 				snprintf(values[i++], NCHARS, "{%d}", BackendXidGetPid(HeapTupleHeaderGetXmax(tuple->t_data)));
+ 			}
+ 
+ 			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+ 
+ 			/* build a tuple */
+ 			tuple = BuildTupleFromCStrings(attinmeta, values);
+ 
+ 			/* make the tuple into a datum */
+ 			result = HeapTupleGetDatum(tuple);
+ 
+ 			/* Clean up */
+ 			for (i = 0; i < mydata->ncolumns; i++)
+ 				pfree(values[i]);
+ 			pfree(values);
+ 
+ 			SRF_RETURN_NEXT(funcctx, result);
+ 		}
+ 		else
+ 		{
+ 			LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+ 		}
+ 	}
+ 
+ 	heap_endscan(scan);
+ 	heap_close(mydata->rel, AccessShareLock);
+ 
+ 	SRF_RETURN_DONE(funcctx);
+ }
diff --git a/src/extension/pgrowlocks/pgrowlocks.control b/src/extension/pgrowlocks/pgrowlocks.control
index ...a6ba164 .
*** a/src/extension/pgrowlocks/pgrowlocks.control
--- b/src/extension/pgrowlocks/pgrowlocks.control
***************
*** 0 ****
--- 1,5 ----
+ # pgrowlocks extension
+ comment = 'show row-level locking information'
+ default_version = '1.0'
+ module_pathname = '$libdir/pgrowlocks'
+ relocatable = true
diff --git a/src/extension/pgstattuple/.gitignore b/src/extension/pgstattuple/.gitignore
index ...5dcb3ff .
*** a/src/extension/pgstattuple/.gitignore
--- b/src/extension/pgstattuple/.gitignore
***************
*** 0 ****
--- 1,4 ----
+ # Generated subdirectories
+ /log/
+ /results/
+ /tmp_check/
diff --git a/src/extension/pgstattuple/Makefile b/src/extension/pgstattuple/Makefile
index ...c7144ae .
*** a/src/extension/pgstattuple/Makefile
--- b/src/extension/pgstattuple/Makefile
***************
*** 0 ****
--- 1,20 ----
+ # src/extension/pgstattuple/Makefile
+ 
+ MODULE_big	= pgstattuple
+ OBJS		= pgstattuple.o pgstatindex.o
+ 
+ EXTENSION = pgstattuple
+ DATA = pgstattuple--1.0.sql pgstattuple--unpackaged--1.0.sql
+ 
+ REGRESS = pgstattuple
+ 
+ ifdef USE_PGXS
+ PG_CONFIG = pg_config
+ PGXS := $(shell $(PG_CONFIG) --pgxs)
+ include $(PGXS)
+ else
+ subdir = src/extension/pgstattuple
+ top_builddir = ../../..
+ include $(top_builddir)/src/Makefile.global
+ include $(top_srcdir)/src/extension/extension-global.mk
+ endif
diff --git a/src/extension/pgstattuple/expected/pgstattuple.out b/src/extension/pgstattuple/expected/pgstattuple.out
index ...7f28177 .
*** a/src/extension/pgstattuple/expected/pgstattuple.out
--- b/src/extension/pgstattuple/expected/pgstattuple.out
***************
*** 0 ****
--- 1,38 ----
+ CREATE EXTENSION pgstattuple;
+ --
+ -- It's difficult to come up with platform-independent test cases for
+ -- the pgstattuple functions, but the results for empty tables and
+ -- indexes should be that.
+ --
+ create table test (a int primary key);
+ NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "test_pkey" for table "test"
+ select * from pgstattuple('test'::text);
+  table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent 
+ -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
+          0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
+ (1 row)
+ 
+ select * from pgstattuple('test'::regclass);
+  table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent 
+ -----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
+          0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
+ (1 row)
+ 
+ select * from pgstatindex('test_pkey');
+  version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation 
+ ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
+        2 |          0 |          0 |             0 |              0 |          0 |           0 |             0 |              NaN |                NaN
+ (1 row)
+ 
+ select pg_relpages('test');
+  pg_relpages 
+ -------------
+            0
+ (1 row)
+ 
+ select pg_relpages('test_pkey');
+  pg_relpages 
+ -------------
+            1
+ (1 row)
+ 
diff --git a/src/extension/pgstattuple/pgstatindex.c b/src/extension/pgstattuple/pgstatindex.c
index ...8082f91 .
*** a/src/extension/pgstattuple/pgstatindex.c
--- b/src/extension/pgstattuple/pgstatindex.c
***************
*** 0 ****
--- 1,293 ----
+ /*
+  * src/extension/pgstattuple/pgstatindex.c
+  *
+  *
+  * pgstatindex
+  *
+  * Copyright (c) 2006 Satoshi Nagayasu <nagayasus@nttdata.co.jp>
+  *
+  * Permission to use, copy, modify, and distribute this software and
+  * its documentation for any purpose, without fee, and without a
+  * written agreement is hereby granted, provided that the above
+  * copyright notice and this paragraph and the following two
+  * paragraphs appear in all copies.
+  *
+  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+  * OF THE POSSIBILITY OF SUCH DAMAGE.
+  *
+  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "access/heapam.h"
+ #include "access/nbtree.h"
+ #include "catalog/namespace.h"
+ #include "funcapi.h"
+ #include "miscadmin.h"
+ #include "storage/bufmgr.h"
+ #include "utils/builtins.h"
+ #include "utils/rel.h"
+ 
+ 
+ extern Datum pgstatindex(PG_FUNCTION_ARGS);
+ extern Datum pg_relpages(PG_FUNCTION_ARGS);
+ 
+ PG_FUNCTION_INFO_V1(pgstatindex);
+ PG_FUNCTION_INFO_V1(pg_relpages);
+ 
+ #define IS_INDEX(r) ((r)->rd_rel->relkind == RELKIND_INDEX)
+ #define IS_BTREE(r) ((r)->rd_rel->relam == BTREE_AM_OID)
+ 
+ #define CHECK_PAGE_OFFSET_RANGE(pg, offnum) { \
+ 		if ( !(FirstOffsetNumber <= (offnum) && \
+ 						(offnum) <= PageGetMaxOffsetNumber(pg)) ) \
+ 			 elog(ERROR, "page offset number out of range"); }
+ 
+ /* note: BlockNumber is unsigned, hence can't be negative */
+ #define CHECK_RELATION_BLOCK_RANGE(rel, blkno) { \
+ 		if ( RelationGetNumberOfBlocks(rel) <= (BlockNumber) (blkno) ) \
+ 			 elog(ERROR, "block number out of range"); }
+ 
+ /* ------------------------------------------------
+  * A structure for a whole btree index statistics
+  * used by pgstatindex().
+  * ------------------------------------------------
+  */
+ typedef struct BTIndexStat
+ {
+ 	uint32		version;
+ 	uint32		level;
+ 	BlockNumber root_blkno;
+ 
+ 	uint64		root_pages;
+ 	uint64		internal_pages;
+ 	uint64		leaf_pages;
+ 	uint64		empty_pages;
+ 	uint64		deleted_pages;
+ 
+ 	uint64		max_avail;
+ 	uint64		free_space;
+ 
+ 	uint64		fragments;
+ } BTIndexStat;
+ 
+ /* ------------------------------------------------------
+  * pgstatindex()
+  *
+  * Usage: SELECT * FROM pgstatindex('t1_pkey');
+  * ------------------------------------------------------
+  */
+ Datum
+ pgstatindex(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	Relation	rel;
+ 	RangeVar   *relrv;
+ 	Datum		result;
+ 	BlockNumber nblocks;
+ 	BlockNumber blkno;
+ 	BTIndexStat indexStat;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use pgstattuple functions"))));
+ 
+ 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 	rel = relation_openrv(relrv, AccessShareLock);
+ 
+ 	if (!IS_INDEX(rel) || !IS_BTREE(rel))
+ 		elog(ERROR, "relation \"%s\" is not a btree index",
+ 			 RelationGetRelationName(rel));
+ 
+ 	/*
+ 	 * Reject attempts to read non-local temporary relations; we would be
+ 	 * likely to get wrong data since we have no visibility into the owning
+ 	 * session's local buffers.
+ 	 */
+ 	if (RELATION_IS_OTHER_TEMP(rel))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("cannot access temporary tables of other sessions")));
+ 
+ 	/*
+ 	 * Read metapage
+ 	 */
+ 	{
+ 		Buffer		buffer = ReadBuffer(rel, 0);
+ 		Page		page = BufferGetPage(buffer);
+ 		BTMetaPageData *metad = BTPageGetMeta(page);
+ 
+ 		indexStat.version = metad->btm_version;
+ 		indexStat.level = metad->btm_level;
+ 		indexStat.root_blkno = metad->btm_root;
+ 
+ 		ReleaseBuffer(buffer);
+ 	}
+ 
+ 	/* -- init counters -- */
+ 	indexStat.root_pages = 0;
+ 	indexStat.internal_pages = 0;
+ 	indexStat.leaf_pages = 0;
+ 	indexStat.empty_pages = 0;
+ 	indexStat.deleted_pages = 0;
+ 
+ 	indexStat.max_avail = 0;
+ 	indexStat.free_space = 0;
+ 
+ 	indexStat.fragments = 0;
+ 
+ 	/*
+ 	 * Scan all blocks except the metapage
+ 	 */
+ 	nblocks = RelationGetNumberOfBlocks(rel);
+ 
+ 	for (blkno = 1; blkno < nblocks; blkno++)
+ 	{
+ 		Buffer		buffer;
+ 		Page		page;
+ 		BTPageOpaque opaque;
+ 
+ 		CHECK_FOR_INTERRUPTS();
+ 
+ 		/* Read and lock buffer */
+ 		buffer = ReadBuffer(rel, blkno);
+ 		LockBuffer(buffer, BUFFER_LOCK_SHARE);
+ 
+ 		page = BufferGetPage(buffer);
+ 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+ 
+ 		/* Determine page type, and update totals */
+ 
+ 		if (P_ISLEAF(opaque))
+ 		{
+ 			int			max_avail;
+ 
+ 			max_avail = BLCKSZ - (BLCKSZ - ((PageHeader) page)->pd_special + SizeOfPageHeaderData);
+ 			indexStat.max_avail += max_avail;
+ 			indexStat.free_space += PageGetFreeSpace(page);
+ 
+ 			indexStat.leaf_pages++;
+ 
+ 			/*
+ 			 * If the next leaf is on an earlier block, it means a
+ 			 * fragmentation.
+ 			 */
+ 			if (opaque->btpo_next != P_NONE && opaque->btpo_next < blkno)
+ 				indexStat.fragments++;
+ 		}
+ 		else if (P_ISDELETED(opaque))
+ 			indexStat.deleted_pages++;
+ 		else if (P_IGNORE(opaque))
+ 			indexStat.empty_pages++;
+ 		else if (P_ISROOT(opaque))
+ 			indexStat.root_pages++;
+ 		else
+ 			indexStat.internal_pages++;
+ 
+ 		/* Unlock and release buffer */
+ 		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+ 		ReleaseBuffer(buffer);
+ 	}
+ 
+ 	relation_close(rel, AccessShareLock);
+ 
+ 	/*----------------------------
+ 	 * Build a result tuple
+ 	 *----------------------------
+ 	 */
+ 	{
+ 		TupleDesc	tupleDesc;
+ 		int			j;
+ 		char	   *values[10];
+ 		HeapTuple	tuple;
+ 
+ 		/* Build a tuple descriptor for our result type */
+ 		if (get_call_result_type(fcinfo, NULL, &tupleDesc) != TYPEFUNC_COMPOSITE)
+ 			elog(ERROR, "return type must be a row type");
+ 
+ 		j = 0;
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "%d", indexStat.version);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "%d", indexStat.level);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, INT64_FORMAT,
+ 				 (indexStat.root_pages +
+ 				  indexStat.leaf_pages +
+ 				  indexStat.internal_pages +
+ 				  indexStat.deleted_pages +
+ 				  indexStat.empty_pages) * BLCKSZ);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, "%u", indexStat.root_blkno);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.internal_pages);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.leaf_pages);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.empty_pages);
+ 		values[j] = palloc(32);
+ 		snprintf(values[j++], 32, INT64_FORMAT, indexStat.deleted_pages);
+ 		values[j] = palloc(32);
+ 		if (indexStat.max_avail > 0)
+ 			snprintf(values[j++], 32, "%.2f",
+ 					 100.0 - (double) indexStat.free_space / (double) indexStat.max_avail * 100.0);
+ 		else
+ 			snprintf(values[j++], 32, "NaN");
+ 		values[j] = palloc(32);
+ 		if (indexStat.leaf_pages > 0)
+ 			snprintf(values[j++], 32, "%.2f",
+ 					 (double) indexStat.fragments / (double) indexStat.leaf_pages * 100.0);
+ 		else
+ 			snprintf(values[j++], 32, "NaN");
+ 
+ 		tuple = BuildTupleFromCStrings(TupleDescGetAttInMetadata(tupleDesc),
+ 									   values);
+ 
+ 		result = HeapTupleGetDatum(tuple);
+ 	}
+ 
+ 	PG_RETURN_DATUM(result);
+ }
+ 
+ /* --------------------------------------------------------
+  * pg_relpages()
+  *
+  * Get the number of pages of the table/index.
+  *
+  * Usage: SELECT pg_relpages('t1');
+  *		  SELECT pg_relpages('t1_pkey');
+  * --------------------------------------------------------
+  */
+ Datum
+ pg_relpages(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	int64		relpages;
+ 	Relation	rel;
+ 	RangeVar   *relrv;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use pgstattuple functions"))));
+ 
+ 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 	rel = relation_openrv(relrv, AccessShareLock);
+ 
+ 	/* note: this will work OK on non-local temp tables */
+ 
+ 	relpages = RelationGetNumberOfBlocks(rel);
+ 
+ 	relation_close(rel, AccessShareLock);
+ 
+ 	PG_RETURN_INT64(relpages);
+ }
diff --git a/src/extension/pgstattuple/pgstattuple--1.0.sql b/src/extension/pgstattuple/pgstattuple--1.0.sql
index ...8781ef3 .
*** a/src/extension/pgstattuple/pgstattuple--1.0.sql
--- b/src/extension/pgstattuple/pgstattuple--1.0.sql
***************
*** 0 ****
--- 1,49 ----
+ /* src/extension/pgstattuple/pgstattuple--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pgstattuple" to load this file. \quit
+ 
+ CREATE FUNCTION pgstattuple(IN relname text,
+     OUT table_len BIGINT,		-- physical table length in bytes
+     OUT tuple_count BIGINT,		-- number of live tuples
+     OUT tuple_len BIGINT,		-- total tuples length in bytes
+     OUT tuple_percent FLOAT8,		-- live tuples in %
+     OUT dead_tuple_count BIGINT,	-- number of dead tuples
+     OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
+     OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
+     OUT free_space BIGINT,		-- free space in bytes
+     OUT free_percent FLOAT8)		-- free space in %
+ AS 'MODULE_PATHNAME', 'pgstattuple'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION pgstattuple(IN reloid oid,
+     OUT table_len BIGINT,		-- physical table length in bytes
+     OUT tuple_count BIGINT,		-- number of live tuples
+     OUT tuple_len BIGINT,		-- total tuples length in bytes
+     OUT tuple_percent FLOAT8,		-- live tuples in %
+     OUT dead_tuple_count BIGINT,	-- number of dead tuples
+     OUT dead_tuple_len BIGINT,		-- total dead tuples length in bytes
+     OUT dead_tuple_percent FLOAT8,	-- dead tuples in %
+     OUT free_space BIGINT,		-- free space in bytes
+     OUT free_percent FLOAT8)		-- free space in %
+ AS 'MODULE_PATHNAME', 'pgstattuplebyid'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION pgstatindex(IN relname text,
+     OUT version INT,
+     OUT tree_level INT,
+     OUT index_size BIGINT,
+     OUT root_block_no BIGINT,
+     OUT internal_pages BIGINT,
+     OUT leaf_pages BIGINT,
+     OUT empty_pages BIGINT,
+     OUT deleted_pages BIGINT,
+     OUT avg_leaf_density FLOAT8,
+     OUT leaf_fragmentation FLOAT8)
+ AS 'MODULE_PATHNAME', 'pgstatindex'
+ LANGUAGE C STRICT;
+ 
+ CREATE FUNCTION pg_relpages(IN relname text)
+ RETURNS BIGINT
+ AS 'MODULE_PATHNAME', 'pg_relpages'
+ LANGUAGE C STRICT;
diff --git a/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql b/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql
index ...3e226dc .
*** a/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql
--- b/src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql
***************
*** 0 ****
--- 1,9 ----
+ /* src/extension/pgstattuple/pgstattuple--unpackaged--1.0.sql */
+ 
+ -- complain if script is sourced in psql, rather than via CREATE EXTENSION
+ \echo Use "CREATE EXTENSION pgstattuple" to load this file. \quit
+ 
+ ALTER EXTENSION pgstattuple ADD function pgstattuple(text);
+ ALTER EXTENSION pgstattuple ADD function pgstattuple(oid);
+ ALTER EXTENSION pgstattuple ADD function pgstatindex(text);
+ ALTER EXTENSION pgstattuple ADD function pg_relpages(text);
diff --git a/src/extension/pgstattuple/pgstattuple.c b/src/extension/pgstattuple/pgstattuple.c
index ...76357ee .
*** a/src/extension/pgstattuple/pgstattuple.c
--- b/src/extension/pgstattuple/pgstattuple.c
***************
*** 0 ****
--- 1,518 ----
+ /*
+  * src/extension/pgstattuple/pgstattuple.c
+  *
+  * Copyright (c) 2001,2002	Tatsuo Ishii
+  *
+  * Permission to use, copy, modify, and distribute this software and
+  * its documentation for any purpose, without fee, and without a
+  * written agreement is hereby granted, provided that the above
+  * copyright notice and this paragraph and the following two
+  * paragraphs appear in all copies.
+  *
+  * IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT,
+  * INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING
+  * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS
+  * DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED
+  * OF THE POSSIBILITY OF SUCH DAMAGE.
+  *
+  * THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
+  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+  * A PARTICULAR PURPOSE.  THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS
+  * IS" BASIS, AND THE AUTHOR HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE,
+  * SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.
+  */
+ 
+ #include "postgres.h"
+ 
+ #include "access/gist_private.h"
+ #include "access/hash.h"
+ #include "access/nbtree.h"
+ #include "access/relscan.h"
+ #include "catalog/namespace.h"
+ #include "funcapi.h"
+ #include "miscadmin.h"
+ #include "storage/bufmgr.h"
+ #include "storage/lmgr.h"
+ #include "utils/builtins.h"
+ #include "utils/tqual.h"
+ 
+ 
+ PG_MODULE_MAGIC;
+ 
+ PG_FUNCTION_INFO_V1(pgstattuple);
+ PG_FUNCTION_INFO_V1(pgstattuplebyid);
+ 
+ extern Datum pgstattuple(PG_FUNCTION_ARGS);
+ extern Datum pgstattuplebyid(PG_FUNCTION_ARGS);
+ 
+ /*
+  * struct pgstattuple_type
+  *
+  * tuple_percent, dead_tuple_percent and free_percent are computable,
+  * so not defined here.
+  */
+ typedef struct pgstattuple_type
+ {
+ 	uint64		table_len;
+ 	uint64		tuple_count;
+ 	uint64		tuple_len;
+ 	uint64		dead_tuple_count;
+ 	uint64		dead_tuple_len;
+ 	uint64		free_space;		/* free/reusable space in bytes */
+ } pgstattuple_type;
+ 
+ typedef void (*pgstat_page) (pgstattuple_type *, Relation, BlockNumber);
+ 
+ static Datum build_pgstattuple_type(pgstattuple_type *stat,
+ 					   FunctionCallInfo fcinfo);
+ static Datum pgstat_relation(Relation rel, FunctionCallInfo fcinfo);
+ static Datum pgstat_heap(Relation rel, FunctionCallInfo fcinfo);
+ static void pgstat_btree_page(pgstattuple_type *stat,
+ 				  Relation rel, BlockNumber blkno);
+ static void pgstat_hash_page(pgstattuple_type *stat,
+ 				 Relation rel, BlockNumber blkno);
+ static void pgstat_gist_page(pgstattuple_type *stat,
+ 				 Relation rel, BlockNumber blkno);
+ static Datum pgstat_index(Relation rel, BlockNumber start,
+ 			 pgstat_page pagefn, FunctionCallInfo fcinfo);
+ static void pgstat_index_page(pgstattuple_type *stat, Page page,
+ 				  OffsetNumber minoff, OffsetNumber maxoff);
+ 
+ /*
+  * build_pgstattuple_type -- build a pgstattuple_type tuple
+  */
+ static Datum
+ build_pgstattuple_type(pgstattuple_type *stat, FunctionCallInfo fcinfo)
+ {
+ #define NCOLUMNS	9
+ #define NCHARS		32
+ 
+ 	HeapTuple	tuple;
+ 	char	   *values[NCOLUMNS];
+ 	char		values_buf[NCOLUMNS][NCHARS];
+ 	int			i;
+ 	double		tuple_percent;
+ 	double		dead_tuple_percent;
+ 	double		free_percent;	/* free/reusable space in % */
+ 	TupleDesc	tupdesc;
+ 	AttInMetadata *attinmeta;
+ 
+ 	/* Build a tuple descriptor for our result type */
+ 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
+ 		elog(ERROR, "return type must be a row type");
+ 
+ 	/*
+ 	 * Generate attribute metadata needed later to produce tuples from raw C
+ 	 * strings
+ 	 */
+ 	attinmeta = TupleDescGetAttInMetadata(tupdesc);
+ 
+ 	if (stat->table_len == 0)
+ 	{
+ 		tuple_percent = 0.0;
+ 		dead_tuple_percent = 0.0;
+ 		free_percent = 0.0;
+ 	}
+ 	else
+ 	{
+ 		tuple_percent = 100.0 * stat->tuple_len / stat->table_len;
+ 		dead_tuple_percent = 100.0 * stat->dead_tuple_len / stat->table_len;
+ 		free_percent = 100.0 * stat->free_space / stat->table_len;
+ 	}
+ 
+ 	/*
+ 	 * Prepare a values array for constructing the tuple. This should be an
+ 	 * array of C strings which will be processed later by the appropriate
+ 	 * "in" functions.
+ 	 */
+ 	for (i = 0; i < NCOLUMNS; i++)
+ 		values[i] = values_buf[i];
+ 	i = 0;
+ 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->table_len);
+ 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_count);
+ 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->tuple_len);
+ 	snprintf(values[i++], NCHARS, "%.2f", tuple_percent);
+ 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_count);
+ 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->dead_tuple_len);
+ 	snprintf(values[i++], NCHARS, "%.2f", dead_tuple_percent);
+ 	snprintf(values[i++], NCHARS, INT64_FORMAT, stat->free_space);
+ 	snprintf(values[i++], NCHARS, "%.2f", free_percent);
+ 
+ 	/* build a tuple */
+ 	tuple = BuildTupleFromCStrings(attinmeta, values);
+ 
+ 	/* make the tuple into a datum */
+ 	return HeapTupleGetDatum(tuple);
+ }
+ 
+ /* ----------
+  * pgstattuple:
+  * returns live/dead tuples info
+  *
+  * C FUNCTION definition
+  * pgstattuple(text) returns pgstattuple_type
+  * see pgstattuple.sql for pgstattuple_type
+  * ----------
+  */
+ 
+ Datum
+ pgstattuple(PG_FUNCTION_ARGS)
+ {
+ 	text	   *relname = PG_GETARG_TEXT_P(0);
+ 	RangeVar   *relrv;
+ 	Relation	rel;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use pgstattuple functions"))));
+ 
+ 	/* open relation */
+ 	relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));
+ 	rel = relation_openrv(relrv, AccessShareLock);
+ 
+ 	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
+ }
+ 
+ Datum
+ pgstattuplebyid(PG_FUNCTION_ARGS)
+ {
+ 	Oid			relid = PG_GETARG_OID(0);
+ 	Relation	rel;
+ 
+ 	if (!superuser())
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+ 				 (errmsg("must be superuser to use pgstattuple functions"))));
+ 
+ 	/* open relation */
+ 	rel = relation_open(relid, AccessShareLock);
+ 
+ 	PG_RETURN_DATUM(pgstat_relation(rel, fcinfo));
+ }
+ 
+ /*
+  * pgstat_relation
+  */
+ static Datum
+ pgstat_relation(Relation rel, FunctionCallInfo fcinfo)
+ {
+ 	const char *err;
+ 
+ 	/*
+ 	 * Reject attempts to read non-local temporary relations; we would be
+ 	 * likely to get wrong data since we have no visibility into the owning
+ 	 * session's local buffers.
+ 	 */
+ 	if (RELATION_IS_OTHER_TEMP(rel))
+ 		ereport(ERROR,
+ 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 				 errmsg("cannot access temporary tables of other sessions")));
+ 
+ 	switch (rel->rd_rel->relkind)
+ 	{
+ 		case RELKIND_RELATION:
+ 		case RELKIND_TOASTVALUE:
+ 		case RELKIND_UNCATALOGED:
+ 		case RELKIND_SEQUENCE:
+ 			return pgstat_heap(rel, fcinfo);
+ 		case RELKIND_INDEX:
+ 			switch (rel->rd_rel->relam)
+ 			{
+ 				case BTREE_AM_OID:
+ 					return pgstat_index(rel, BTREE_METAPAGE + 1,
+ 										pgstat_btree_page, fcinfo);
+ 				case HASH_AM_OID:
+ 					return pgstat_index(rel, HASH_METAPAGE + 1,
+ 										pgstat_hash_page, fcinfo);
+ 				case GIST_AM_OID:
+ 					return pgstat_index(rel, GIST_ROOT_BLKNO + 1,
+ 										pgstat_gist_page, fcinfo);
+ 				case GIN_AM_OID:
+ 					err = "gin index";
+ 					break;
+ 				default:
+ 					err = "unknown index";
+ 					break;
+ 			}
+ 			break;
+ 		case RELKIND_VIEW:
+ 			err = "view";
+ 			break;
+ 		case RELKIND_COMPOSITE_TYPE:
+ 			err = "composite type";
+ 			break;
+ 		case RELKIND_FOREIGN_TABLE:
+ 			err = "foreign table";
+ 			break;
+ 		default:
+ 			err = "unknown";
+ 			break;
+ 	}
+ 
+ 	ereport(ERROR,
+ 			(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+ 			 errmsg("\"%s\" (%s) is not supported",
+ 					RelationGetRelationName(rel), err)));
+ 	return 0;					/* should not happen */
+ }
+ 
+ /*
+  * pgstat_heap -- returns live/dead tuples info in a heap
+  */
+ static Datum
+ pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
+ {
+ 	HeapScanDesc scan;
+ 	HeapTuple	tuple;
+ 	BlockNumber nblocks;
+ 	BlockNumber block = 0;		/* next block to count free space in */
+ 	BlockNumber tupblock;
+ 	Buffer		buffer;
+ 	pgstattuple_type stat = {0};
+ 
+ 	/* Disable syncscan because we assume we scan from block zero upwards */
+ 	scan = heap_beginscan_strat(rel, SnapshotAny, 0, NULL, true, false);
+ 
+ 	nblocks = scan->rs_nblocks; /* # blocks to be scanned */
+ 
+ 	/* scan the relation */
+ 	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
+ 	{
+ 		CHECK_FOR_INTERRUPTS();
+ 
+ 		/* must hold a buffer lock to call HeapTupleSatisfiesVisibility */
+ 		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_SHARE);
+ 
+ 		if (HeapTupleSatisfiesVisibility(tuple, SnapshotNow, scan->rs_cbuf))
+ 		{
+ 			stat.tuple_len += tuple->t_len;
+ 			stat.tuple_count++;
+ 		}
+ 		else
+ 		{
+ 			stat.dead_tuple_len += tuple->t_len;
+ 			stat.dead_tuple_count++;
+ 		}
+ 
+ 		LockBuffer(scan->rs_cbuf, BUFFER_LOCK_UNLOCK);
+ 
+ 		/*
+ 		 * To avoid physically reading the table twice, try to do the
+ 		 * free-space scan in parallel with the heap scan.	However,
+ 		 * heap_getnext may find no tuples on a given page, so we cannot
+ 		 * simply examine the pages returned by the heap scan.
+ 		 */
+ 		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+ 
+ 		while (block <= tupblock)
+ 		{
+ 			CHECK_FOR_INTERRUPTS();
+ 
+ 			buffer = ReadBuffer(rel, block);
+ 			LockBuffer(buffer, BUFFER_LOCK_SHARE);
+ 			stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
+ 			UnlockReleaseBuffer(buffer);
+ 			block++;
+ 		}
+ 	}
+ 	heap_endscan(scan);
+ 
+ 	while (block < nblocks)
+ 	{
+ 		CHECK_FOR_INTERRUPTS();
+ 
+ 		buffer = ReadBuffer(rel, block);
+ 		LockBuffer(buffer, BUFFER_LOCK_SHARE);
+ 		stat.free_space += PageGetHeapFreeSpace((Page) BufferGetPage(buffer));
+ 		UnlockReleaseBuffer(buffer);
+ 		block++;
+ 	}
+ 
+ 	relation_close(rel, AccessShareLock);
+ 
+ 	stat.table_len = (uint64) nblocks *BLCKSZ;
+ 
+ 	return build_pgstattuple_type(&stat, fcinfo);
+ }
+ 
+ /*
+  * pgstat_btree_page -- check tuples in a btree page
+  */
+ static void
+ pgstat_btree_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+ {
+ 	Buffer		buf;
+ 	Page		page;
+ 
+ 	buf = ReadBuffer(rel, blkno);
+ 	LockBuffer(buf, BT_READ);
+ 	page = BufferGetPage(buf);
+ 
+ 	/* Page is valid, see what to do with it */
+ 	if (PageIsNew(page))
+ 	{
+ 		/* fully empty page */
+ 		stat->free_space += BLCKSZ;
+ 	}
+ 	else
+ 	{
+ 		BTPageOpaque opaque;
+ 
+ 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+ 		if (opaque->btpo_flags & (BTP_DELETED | BTP_HALF_DEAD))
+ 		{
+ 			/* recyclable page */
+ 			stat->free_space += BLCKSZ;
+ 		}
+ 		else if (P_ISLEAF(opaque))
+ 		{
+ 			pgstat_index_page(stat, page, P_FIRSTDATAKEY(opaque),
+ 							  PageGetMaxOffsetNumber(page));
+ 		}
+ 		else
+ 		{
+ 			/* root or node */
+ 		}
+ 	}
+ 
+ 	_bt_relbuf(rel, buf);
+ }
+ 
+ /*
+  * pgstat_hash_page -- check tuples in a hash page
+  */
+ static void
+ pgstat_hash_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+ {
+ 	Buffer		buf;
+ 	Page		page;
+ 
+ 	_hash_getlock(rel, blkno, HASH_SHARE);
+ 	buf = _hash_getbuf(rel, blkno, HASH_READ, 0);
+ 	page = BufferGetPage(buf);
+ 
+ 	if (PageGetSpecialSize(page) == MAXALIGN(sizeof(HashPageOpaqueData)))
+ 	{
+ 		HashPageOpaque opaque;
+ 
+ 		opaque = (HashPageOpaque) PageGetSpecialPointer(page);
+ 		switch (opaque->hasho_flag)
+ 		{
+ 			case LH_UNUSED_PAGE:
+ 				stat->free_space += BLCKSZ;
+ 				break;
+ 			case LH_BUCKET_PAGE:
+ 			case LH_OVERFLOW_PAGE:
+ 				pgstat_index_page(stat, page, FirstOffsetNumber,
+ 								  PageGetMaxOffsetNumber(page));
+ 				break;
+ 			case LH_BITMAP_PAGE:
+ 			case LH_META_PAGE:
+ 			default:
+ 				break;
+ 		}
+ 	}
+ 	else
+ 	{
+ 		/* maybe corrupted */
+ 	}
+ 
+ 	_hash_relbuf(rel, buf);
+ 	_hash_droplock(rel, blkno, HASH_SHARE);
+ }
+ 
+ /*
+  * pgstat_gist_page -- check tuples in a gist page
+  */
+ static void
+ pgstat_gist_page(pgstattuple_type *stat, Relation rel, BlockNumber blkno)
+ {
+ 	Buffer		buf;
+ 	Page		page;
+ 
+ 	buf = ReadBuffer(rel, blkno);
+ 	LockBuffer(buf, GIST_SHARE);
+ 	gistcheckpage(rel, buf);
+ 	page = BufferGetPage(buf);
+ 
+ 	if (GistPageIsLeaf(page))
+ 	{
+ 		pgstat_index_page(stat, page, FirstOffsetNumber,
+ 						  PageGetMaxOffsetNumber(page));
+ 	}
+ 	else
+ 	{
+ 		/* root or node */
+ 	}
+ 
+ 	UnlockReleaseBuffer(buf);
+ }
+ 
+ /*
+  * pgstat_index -- returns live/dead tuples info in a generic index
+  */
+ static Datum
+ pgstat_index(Relation rel, BlockNumber start, pgstat_page pagefn,
+ 			 FunctionCallInfo fcinfo)
+ {
+ 	BlockNumber nblocks;
+ 	BlockNumber blkno;
+ 	pgstattuple_type stat = {0};
+ 
+ 	blkno = start;
+ 	for (;;)
+ 	{
+ 		/* Get the current relation length */
+ 		LockRelationForExtension(rel, ExclusiveLock);
+ 		nblocks = RelationGetNumberOfBlocks(rel);
+ 		UnlockRelationForExtension(rel, ExclusiveLock);
+ 
+ 		/* Quit if we've scanned the whole relation */
+ 		if (blkno >= nblocks)
+ 		{
+ 			stat.table_len = (uint64) nblocks *BLCKSZ;
+ 
+ 			break;
+ 		}
+ 
+ 		for (; blkno < nblocks; blkno++)
+ 		{
+ 			CHECK_FOR_INTERRUPTS();
+ 
+ 			pagefn(&stat, rel, blkno);
+ 		}
+ 	}
+ 
+ 	relation_close(rel, AccessShareLock);
+ 
+ 	return build_pgstattuple_type(&stat, fcinfo);
+ }
+ 
+ /*
+  * pgstat_index_page -- for generic index page
+  */
+ static void
+ pgstat_index_page(pgstattuple_type *stat, Page page,
+ 				  OffsetNumber minoff, OffsetNumber maxoff)
+ {
+ 	OffsetNumber i;
+ 
+ 	stat->free_space += PageGetFreeSpace(page);
+ 
+ 	for (i = minoff; i <= maxoff; i = OffsetNumberNext(i))
+ 	{
+ 		ItemId		itemid = PageGetItemId(page, i);
+ 
+ 		if (ItemIdIsDead(itemid))
+ 		{
+ 			stat->dead_tuple_count++;
+ 			stat->dead_tuple_len += ItemIdGetLength(itemid);
+ 		}
+ 		else
+ 		{
+ 			stat->tuple_count++;
+ 			stat->tuple_len += ItemIdGetLength(itemid);
+ 		}
+ 	}
+ }
diff --git a/src/extension/pgstattuple/pgstattuple.control b/src/extension/pgstattuple/pgstattuple.control
index ...7b5129b .
*** a/src/extension/pgstattuple/pgstattuple.control
--- b/src/extension/pgstattuple/pgstattuple.control
***************
*** 0 ****
--- 1,5 ----
+ # pgstattuple extension
+ comment = 'show tuple-level statistics'
+ default_version = '1.0'
+ module_pathname = '$libdir/pgstattuple'
+ relocatable = true
diff --git a/src/extension/pgstattuple/sql/pgstattuple.sql b/src/extension/pgstattuple/sql/pgstattuple.sql
index ...2fd1152 .
*** a/src/extension/pgstattuple/sql/pgstattuple.sql
--- b/src/extension/pgstattuple/sql/pgstattuple.sql
***************
*** 0 ****
--- 1,17 ----
+ CREATE EXTENSION pgstattuple;
+ 
+ --
+ -- It's difficult to come up with platform-independent test cases for
+ -- the pgstattuple functions, but the results for empty tables and
+ -- indexes should be that.
+ --
+ 
+ create table test (a int primary key);
+ 
+ select * from pgstattuple('test'::text);
+ select * from pgstattuple('test'::regclass);
+ 
+ select * from pgstatindex('test_pkey');
+ 
+ select pg_relpages('test');
+ select pg_relpages('test_pkey');
#14Thom Brown
thom@linux.com
In reply to: Greg Smith (#13)
Re: Core Extensions relocation

On 14 November 2011 09:08, Greg Smith <greg@2ndquadrant.com> wrote:

I've revived the corpose of the patch submitted in May, now that it's a much
less strange time of the development cycle to consider it.
 http://archives.postgresql.org/message-id/4DF048BD.8040302@2ndquadrant.com
was the first attempt to move some extensions from contrib/ to a new
src/extension/ directory.  I have fixed the main complaints from the last
submit attempt, that I accidentally grabbed some old makesfiles and CVS
junk.  The new attempt is attached, and is easiest to follow with the a diff
view that understands "moved a file", like github's:
 https://github.com/greg2ndQuadrant/postgres/compare/master...core-extensions

You can also check out the docs changes done so far at
http://www.highperfpostgres.com/docs/html/extensions.html  I reorganized the
docs to break out what I decided to tentatively name "Core Extensions" into
their own chapter.  They're no longer mixed in with the rest of the contrib
modules, and I introduce them a bit differently.  I'm not completely happy
on the wordering there yet.  The use of both "modules" and "extensions" is
probably worth eliminating, and maybe that continues on to doing that
against the language I swiped from the contrib intro too.  There's also a
lot of shared text at the end there, common wording from that and the
contrib page about how to install and migrate these extensions.  Not sure
how to refactor it out into another section cleanly though.

I'm all for removing all mention of "modules". It's ambiguous and
used inconsistently.

In my previous post in this area
(http://archives.postgresql.org/pgsql-hackers/2011-10/msg00781.php) I
suggested that bundling tools, libraries and extensions together in
the same category is confusing. So those are still a problem for me.
And auto_explain appears in your new "Core Extensions" section, but
it's not an extension in the terminology PostgreSQL uses, so that's
also potentially confusing.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#15Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Thom Brown (#14)
Re: Core Extensions relocation

Thom Brown <thom@linux.com> writes:

I'm all for removing all mention of "modules". It's ambiguous and
used inconsistently.

The module is the shared library object. It should be possible to use
that consistently. And I have some plans on my TODO list about them
anyway, so making them disappear from the manual would not serve my
later plans :)

And auto_explain appears in your new "Core Extensions" section, but
it's not an extension in the terminology PostgreSQL uses, so that's
also potentially confusing.

This is a related problem, we should have a terminology for contrib
tools such as pg_standby or pg_archivecleanup, for modules like the one
you talk about, that provide new features but nothing visible from SQL,
and extensions, that are all about SQL --- and if I can work on my plans
will get even more about SQL in a near future.

It's too late for me today to contribute nice ideas here though.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

#16Josh Berkus
josh@agliodbs.com
In reply to: Dimitri Fontaine (#15)
Re: Core Extensions relocation

This is a related problem, we should have a terminology for contrib
tools such as pg_standby or pg_archivecleanup, for modules like the one
you talk about, that provide new features but nothing visible from SQL,
and extensions, that are all about SQL --- and if I can work on my plans
will get even more about SQL in a near future.

I see nothing wrong with "Tools" and "Extensions". I'm not sure that
having one catch-all name for them serves the user.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#17Josh Berkus
josh@agliodbs.com
In reply to: Greg Smith (#13)
Re: Core Extensions relocation

Greg,

So I'm a bit unclear on why most of the optional data types were
excluded from your list of Core Extensions. I would regard the
following as stable and of general utility:

btree_gin
btree_gist
citext
dblink
file_fdw
fuzzystrmatch
hstore
intarray
isn
ltree
pgcrypto
pg_trgm
unaccent
uuid-ossp

These should, in my opinion, all be Core Extensions. I'd go further to
say that if something is materially an extension (as opposed to a tool
or a code example), and we're shipping it with the core distribution, it
either ought to be a core extension, or it should be kicked out to PGXN.

Am I completely misunderstanding what you're trying to accomplish here?

... also, why is there still a "tsearch2" contrib module around at all?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#18Thom Brown
thom@linux.com
In reply to: Josh Berkus (#17)
Re: Core Extensions relocation

On 15 November 2011 00:56, Josh Berkus <josh@agliodbs.com> wrote:

Greg,

So I'm a bit unclear on why most of the optional data types were
excluded from your list of Core Extensions.  I would regard the
following as stable and of general utility:

btree_gin
btree_gist
citext
dblink
file_fdw
fuzzystrmatch
hstore
intarray
isn
ltree
pgcrypto
pg_trgm
unaccent
uuid-ossp

Greg clarified on the core extensions page text:

"These core extensions supply useful features in areas such as
database diagnostics and performance monitoring."

None of those others perform such a role. Instead they add additional
functionality intended to be utilised as part of general data usage,
adding new types, operators, query functions etc. Maybe the term
"core" is inappropriate. Instead we might wish to refer to them as
"utility extensions" or something like that, although that may be just
as vague.

... also, why is there still a "tsearch2" contrib module around at all?

Backwards compatibility. No-one will use it except if they're coming
from an older version.

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#19Peter Geoghegan
peter@2ndquadrant.com
In reply to: Josh Berkus (#17)
Re: Core Extensions relocation

On 15 November 2011 00:56, Josh Berkus <josh@agliodbs.com> wrote:

So I'm a bit unclear on why most of the optional data types were
excluded from your list of Core Extensions.  I would regard the
following as stable and of general utility:

isn

I consider contrib/isn to be quite broken. It hard codes ISBN prefixes
for the purposes of sanitising ISBNs, even though their assignment is
actually controlled by a decentralised body of regional authorities.
I'd vote for kicking it out of contrib.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#20Greg Smith
greg@2ndQuadrant.com
In reply to: Josh Berkus (#17)
Re: Core Extensions relocation

On 11/14/2011 07:56 PM, Josh Berkus wrote:

So I'm a bit unclear on why most of the optional data types were
excluded from your list of Core Extensions.

I was aiming for the extensions that seemed uncontroversial for a first
pass here. One of the tests I applied was "do people sometimes need
this module after going into production with their application?" The
very specific problem I was most concerned about eliminating was people
discovering they needed an extension to troubleshoot performance or
corruption issues, only to discover it wasn't available--because they
hadn't installed the postgresql-contrib package. New package
installation can be a giant pain to get onto a production system in some
places, if it wasn't there during QA etc.

All of the data type extensions fail that test. If you need one of
those, you would have discovered that on your development server, and
made sure the contrib package was available on production too. There
very well may be some types that should be rolled into the core
extensions list, but I didn't want arguments over that to block moving
forward with the set I did suggest. We can always move more of them
later, if this general approach is accepted. It only takes about 5
minutes per extension to move them from contrib to src/extension, once
the new directory tree and doc section is there. But I didn't want to
do the work of moving another 15 of them if the whole idea was going to
get shot down.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#21Robert Haas
robertmhaas@gmail.com
In reply to: Greg Smith (#20)
Re: Core Extensions relocation

On Mon, Nov 14, 2011 at 8:44 PM, Greg Smith <greg@2ndquadrant.com> wrote:

On 11/14/2011 07:56 PM, Josh Berkus wrote:

So I'm a bit unclear on why most of the optional data types were
excluded from your list of Core Extensions.

I was aiming for the extensions that seemed uncontroversial for a first pass
here.  One of the tests I applied was "do people sometimes need this module
after going into production with their application?"  The very specific
problem I was most concerned about eliminating was people discovering they
needed an extension to troubleshoot performance or corruption issues, only
to discover it wasn't available--because they hadn't installed the
postgresql-contrib package.  New package installation can be a giant pain to
get onto a production system in some places, if it wasn't there during QA
etc.

All of the data type extensions fail that test.  If you need one of those,
you would have discovered that on your development server, and made sure the
contrib package was available on production too.  There very well may be
some types that should be rolled into the core extensions list, but I didn't
want arguments over that to block moving forward with the set I did suggest.
 We can always move more of them later, if this general approach is
accepted.  It only takes about 5 minutes per extension to move them from
contrib to src/extension, once the new directory tree and doc section is
there.  But I didn't want to do the work of moving another 15 of them if the
whole idea was going to get shot down

I continue to think that we should be trying to sort these by subject
matter. The term "core extensions" doesn't convey that these are
server management and debugging tools, hence Josh's confusion.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#22Greg Smith
greg@2ndQuadrant.com
In reply to: Robert Haas (#21)
Re: Core Extensions relocation

On 11/14/2011 10:09 PM, Robert Haas wrote:

I continue to think that we should be trying to sort these by subject
matter. The term "core extensions" doesn't convey that these are
server management and debugging tools, hence Josh's confusion.

I'm not attached to the name, which I just pulled out of the air for the
documentation. Could just as easily call them built-in modules or
extensions. If the objection is that "extensions" isn't technically
correct for auto-explain, you might call them core add-ons instead. My
thinking was that the one exception didn't make it worth the trouble to
introduce a new term altogether here. There's already too many terms
used for talking about this sort of thing, the confusion from using a
word other than "extensions" seemed larger than the confusion sown by
auto-explain not fitting perfectly.

The distinction I care about here is primarily a packaging one. These
are server additions that people should be able to count on having
available, whereas right now they may or may not be installed depending
on if contrib was added. Everything I'm touching requires our RPM and
Debian packagers (at least) make a packaging change, too. I can't
justify why that's worth doing for any of the other extensions, which is
one reason I don't try to tackle them.

The type of finer sorting you and Thom are suggesting seems like it's
mainly a documentation change to me. I'm indifferent to the idea; no
plans to either work on it or object to it. The docs could be made
easier to follow here without any change to the directory tree, and
trying to push out a larger packaging change has downsides. Useful
reminder reading here is
http://wiki.postgresql.org/wiki/PgCon_2011_Developer_Meeting#Moving_Contrib_Around
To quote from there, "Users hate having loads and loads of packages. We
do need to be careful not to oversplit it." There's some useful notes
about dependency issues there too.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#23Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Thom Brown (#18)
Re: Core Extensions relocation

Thom Brown <thom@linux.com> writes:

None of those others perform such a role. Instead they add additional
functionality intended to be utilised as part of general data usage,
adding new types, operators, query functions etc. Maybe the term
"core" is inappropriate. Instead we might wish to refer to them as
"utility extensions" or something like that, although that may be just
as vague.

The term “core” here intends to show off that those extensions are
maintained by the core PostgreSQL developer team. If needs be, those
extensions will get updated in minor releases (crash, bugs, security,
etc).

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

#24Joshua Berkus
josh@agliodbs.com
In reply to: Greg Smith (#22)
Re: Core Extensions relocation

Greg,

I'm not attached to the name, which I just pulled out of the air for
the
documentation. Could just as easily call them built-in modules or
extensions. If the objection is that "extensions" isn't technically
correct for auto-explain, you might call them core add-ons instead.
My
thinking was that the one exception didn't make it worth the trouble
to
introduce a new term altogether here. There's already too many terms
used for talking about this sort of thing, the confusion from using a
word other than "extensions" seemed larger than the confusion sown by
auto-explain not fitting perfectly.

Well, I do think it should be *something* Extensions. But Core Extensions implies that the other stuff is just random code, and makes the user wonder why it's included at all. If we're going to rename some of the extensions, then we really need to rename them all or we look like those are being depreciated.

Maybe:

Core Management Extensions
Core Development Extensions
Additional Database Tools
Code Examples
Legacy Modules

I think that covers everything we have in contrib.

Given discussion, is there any point in reporting on the actual patch yet?

--Josh Berkus

#25Joshua Berkus
josh@agliodbs.com
In reply to: Peter Geoghegan (#19)
Re: Core Extensions relocation

Peter,

I consider contrib/isn to be quite broken. It hard codes ISBN
prefixes
for the purposes of sanitising ISBNs, even though their assignment is
actually controlled by a decentralised body of regional authorities.
I'd vote for kicking it out of contrib.

Submit a patch to fix it then.

I use ISBN in 2 projects, and it's working fine for me. I'll strongly resist any attempt to "kick it out".

--Josh Berkus

#26Robert Haas
robertmhaas@gmail.com
In reply to: Joshua Berkus (#25)
Re: Core Extensions relocation

On Tue, Nov 15, 2011 at 12:54 PM, Joshua Berkus <josh@agliodbs.com> wrote:

I consider contrib/isn to be quite broken. It hard codes ISBN
prefixes
for the purposes of sanitising ISBNs, even though their assignment is
actually controlled by a decentralised body of regional authorities.
I'd vote for kicking it out of contrib.

Submit a patch to fix it then.

It's not fixable. The ISBN datatype is the equivalent of having an
SSN datatype that only allows SSNs that have actually been assigned to
a US citizen.

I use ISBN in 2 projects, and it's working fine for me.  I'll strongly resist any attempt to "kick it out".

That's exactly why contrib is a random amalgamation of really useful
stuff and utter crap: people feel justified in defending the continued
existence of the crap on the sole basis that it's useful to them
personally.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#27Greg Smith
greg@2ndQuadrant.com
In reply to: Joshua Berkus (#24)
Re: Core Extensions relocation

On 11/15/2011 12:53 PM, Joshua Berkus wrote:

Given discussion, is there any point in reporting on the actual patch yet?

I don't expect the discussion will alter the code changes that are the
main chunk of the patch here. The only place the most disputed parts
impact is the documentation.

I like "Management Extensions" as an alternate name for this category
instead, even though it still has the issue that auto_explain isn't
technically an extension. The name does help suggest why they're thrown
into a different directory and package.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#28Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#26)
Re: Core Extensions relocation

Robert Haas wrote:

I use ISBN in 2 projects, and it's working fine for me. ?I'll strongly resist any attempt to "kick it out".

That's exactly why contrib is a random amalgamation of really useful
stuff and utter crap: people feel justified in defending the continued
existence of the crap on the sole basis that it's useful to them
personally.

Agreed. Berkus must have one million customers to have X customers
using every feature we want to remove or change.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#29Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#26)
Re: Core Extensions relocation

Robert Haas <robertmhaas@gmail.com> wrote:

Joshua Berkus <josh@agliodbs.com> wrote:

I consider contrib/isn to be quite broken. It hard codes ISBN
prefixes for the purposes of sanitising ISBNs, even though their
assignment is actually controlled by a decentralised body of
regional authorities.

By an international standard which says what numbers are valid in
the "prefix element" and "registration group element" of the ISBN
for each of those regional authorities, and how the check digit is
to be calculated.

I'd vote for kicking it out of contrib.

Submit a patch to fix it then.

It's not fixable. The ISBN datatype is the equivalent of having
an SSN datatype that only allows SSNs that have actually been
assigned to a US citizen.

Certainly it would make sense to go so far as to support the overall
standard format as described here:

http://www.isbn-international.org/faqs/view/5#q_5

Beyond the broad strokes there, perhaps it would make sense for the
type to be able to digest a RangeMessage.xml file supplied by the
standards organization, so that the current ranges could be plugged
in as needed independently of the PostgreSQL release.

http://www.isbn-international.org/page/ranges
http://www.isbn-international.org/pages/media/Range%20message/RangeMessage.pdf

Hard-coding ranges as of some moment in time seems pretty dubious.

-Kevin

#30Josh Berkus
josh@agliodbs.com
In reply to: Robert Haas (#26)
Re: ISN was: Core Extensions relocation

Submit a patch to fix it then.

It's not fixable. The ISBN datatype is the equivalent of having an
SSN datatype that only allows SSNs that have actually been assigned to
a US citizen.

Nothing is "not fixable". "not fixable without breaking backwards
compatibility" is entirely possible, though. If fixing it led to two
different versions of ISN, then that would be a reason to push it to
PGXN instead of shipping it.

It's not as if ISN is poorly coded. This is a spec issue, which must
have been debated when we first included it. No?

That's exactly why contrib is a random amalgamation of really useful
stuff and utter crap: people feel justified in defending the continued
existence of the crap on the sole basis that it's useful to them
personally.

Why else would we justify anything? It's very difficult to argue on the
basis of theoretical users. How would we really know what a theoretical
user wants?

Calling something "crap" because it has a spec issue is unwarranted.
We're going to get nowhere in this discussion as long as people are
using extreme and non-descriptive terms.

The thing is, most of the extensions in /contrib have major flaws, or
they would have been folded in to the core code by now. CITEXT doesn't
support multiple collations. INTARRAY and LTREE have inconsistent
operators and many bugs. CUBE lacks documentation. DBlink is an
ongoing battle with security holes. etc.

Picking out one of those and saying "this is crap because of reason X,
but I'll ignore all the flaws in all these other extensions" is
inconsistent and not liable to produce results. Now, if you wanted to
argue that we should kick *all* of the portable extensions out of
/contrib and onto PGXN, then you'd have a much stronger argument.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#31Peter Geoghegan
peter@2ndquadrant.com
In reply to: Robert Haas (#26)
Re: Core Extensions relocation

On 15 November 2011 18:03, Robert Haas <robertmhaas@gmail.com> wrote:

It's not fixable.  The ISBN datatype is the equivalent of having an
SSN datatype that only allows SSNs that have actually been assigned to
a US citizen.

That isn't even the worst part. isn is basically only useful in the US
at the moment, because in every other country there are a number of
bar code symbologies that co-exist within supply chain management for
various reasons. Only in the US can you reasonably assume that all
articles that you come across will be UPC, and even that might be a
shaky assumption these days.

In the E.U. and much of the rest of the world, products that you buy
in the store will bear one of the following symbologies: EAN-8 (for
small articles like chewing gum), UPC (the American one, 12 digits),
EAN-13 and GTIN-14. Some, but not all of these are available from
contrib/isn. There is no datatype that represents "some known barcode
format", even though writing an SQL function that can be used in a
domain check constraint to do just that is next to trivial. I guess
that means that you'd either have to have multiple columns for each
datatype, each existing in case the article in question was of that
particular datatype, or you'd need to make a hard assumption about the
symbology used for all articles that will ever be entered.

The way that these formats maintain backwards compatibility is through
making previous formats valid as the new format by padding zeros to
the left of the older format. So a UPC is actually a valid EAN-13,
just by adding a zero to the start - the US EAN country code is zero,
IIRC, so the rest of the world can pretend that Americans use
EAN-13s/GTIN-14s, even though they think that they use UPCs. The check
digit algorithms for each successive symbology are built such that
this works. This is why a DIY bar code bigint domain can be written so
easily, and also why doing so makes way more sense than using this
contrib module.

To my mind, contrib/isn is a solution looking for a problem, and
that's before we even talk about ISBN prefixes. By including it, we
give users a false sense of security about doing the right thing, when
they're very probably not.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#32Alvaro Herrera
alvherre@commandprompt.com
In reply to: Robert Haas (#26)
Re: Core Extensions relocation

Excerpts from Robert Haas's message of mar nov 15 15:03:03 -0300 2011:

On Tue, Nov 15, 2011 at 12:54 PM, Joshua Berkus <josh@agliodbs.com> wrote:

I consider contrib/isn to be quite broken. It hard codes ISBN
prefixes
for the purposes of sanitising ISBNs, even though their assignment is
actually controlled by a decentralised body of regional authorities.
I'd vote for kicking it out of contrib.

Submit a patch to fix it then.

It's not fixable. The ISBN datatype is the equivalent of having an
SSN datatype that only allows SSNs that have actually been assigned to
a US citizen.

Interesting. Isn't it possible to separate it into two parts, one
containing generic input, formatting and check digits verification, and
another one that would do the prefix matching? Presumably, the first
part would still be useful without prefix validation, wouldn't it?
Surely the other validation rules aren't different for the various
prefixes.

Perhaps the prefix matching code should not be in C, or at least use a
lookup table that's not in C code, so that it can be updated in userland
without having to recompile. (BTW, this is very similar to the problem
confronted by the E.164 module, which attempts to do telephone number
validation and formatting; there's a generic body of code that handles
the basic country code validation, but there's supposed to be some
per-CC formatting rules which we're not really clear on where to store.
We punted on it by just having that in a GUC, so that the user can
update it easily; but that's clearly not the best solution).

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

#33Peter Geoghegan
peter@2ndquadrant.com
In reply to: Josh Berkus (#30)
Re: ISN was: Core Extensions relocation

On 15 November 2011 19:01, Josh Berkus <josh@agliodbs.com> wrote:

Nothing is "not fixable".

My idea of fixing contrib/isn would be to remove so much of it that it
would obviously make more sense to use simple, flexible SQL. It simply
makes way too many assumptions about the user's business rules for a
generic C module.

Calling something "crap" because it has a spec issue is unwarranted.
We're going to get nowhere in this discussion as long as people are
using extreme and non-descriptive terms.

It is warranted, because contrib/isn is extremely wrong-headed.

The thing is, most of the extensions in /contrib have major flaws, or
they would have been folded in to the core code by now.  CITEXT doesn't
support multiple collations.  INTARRAY and LTREE have inconsistent
operators and many bugs.  CUBE lacks documentation.  DBlink is an
ongoing battle with security holes.  etc.

The difference is that it's possible to imagine a scenario under which
I could recommend using any one of those modules, despite their flaws.
I could not contrive a reason for using contrib/isn.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#34Robert Haas
robertmhaas@gmail.com
In reply to: Josh Berkus (#30)
Re: ISN was: Core Extensions relocation

On Tue, Nov 15, 2011 at 2:01 PM, Josh Berkus <josh@agliodbs.com> wrote:

Submit a patch to fix it then.

It's not fixable.  The ISBN datatype is the equivalent of having an
SSN datatype that only allows SSNs that have actually been assigned to
a US citizen.

Nothing is "not fixable".  "not fixable without breaking backwards
compatibility" is entirely possible, though.  If fixing it led to two
different versions of ISN, then that would be a reason to push it to
PGXN instead of shipping it.

Well, the way to fix it would be to publish a new version of
PostgreSQL every time the international authority that assigns ISBN
prefixes allocates a new one, and for everyone to then update their
PostgreSQL installation every time we do that. That doesn't, however,
seem very practical. It just doesn't make sense to define a datatype
where the set of legal values changes over time. The right place to
put such constraints in the application logic, where it doesn't take a
database upgrade to fix the problem.

It's not as if ISN is poorly coded. This is a spec issue, which must
have been debated when we first included it. No?

I think our standards for inclusion have changed over the years - a
necessary part of project growth, or we would have 500 contrib modules
instead of 50. The original isbn_issn module was contributed in 1998;
it was replaced by contrib/isn in 2006 because the original module
became obsolete. I think it's fair to say that if this code were
submitted today it would never get accepted into our tree, and the
submitter would be advised not so much to publish on PGXN instead as
to throw it away entirely and rethink their entire approach to the
problem.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#35Kevin Grittner
Kevin.Grittner@wicourts.gov
In reply to: Robert Haas (#34)
Re: ISN was: Core Extensions relocation

Robert Haas <robertmhaas@gmail.com> wrote:

Josh Berkus <josh@agliodbs.com> wrote:

Nothing is "not fixable". "not fixable without breaking
backwards compatibility" is entirely possible, though. If fixing
it led to two different versions of ISN, then that would be a
reason to push it to PGXN instead of shipping it.

Well, the way to fix it would be to publish a new version of
PostgreSQL every time the international authority that assigns
ISBN prefixes allocates a new one, and for everyone to then update
their PostgreSQL installation every time we do that. That
doesn't, however, seem very practical.

Having just taken a closer look at contrib/isn, I'm inclined to
think the current implementation is pretty hopeless. ISBN seems
common enough and standardized enough that it could perhaps be
included in contrib with the proviso that ranges would only be
validated through pointing to a copy of the XML provided by the
standards body -- it wouldn't be up to PostgreSQL to supply that.

The other types in contrib/isn are things I don't know enough about
to have an opinion, but it seems a little odd to shove them all
together. It would seem more natural to me to have a distinct type
for each and, if needed, figure out how to get a clean union of the
types.

-Kevin

#36Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#30)
Re: ISN was: Core Extensions relocation

Josh Berkus <josh@agliodbs.com> writes:

The thing is, most of the extensions in /contrib have major flaws, or
they would have been folded in to the core code by now. CITEXT doesn't
support multiple collations. INTARRAY and LTREE have inconsistent
operators and many bugs. CUBE lacks documentation. DBlink is an
ongoing battle with security holes. etc.

Picking out one of those and saying "this is crap because of reason X,
but I'll ignore all the flaws in all these other extensions" is
inconsistent and not liable to produce results. Now, if you wanted to
argue that we should kick *all* of the portable extensions out of
/contrib and onto PGXN, then you'd have a much stronger argument.

There's a larger issue here, which is that a lot of the stuff in contrib
is useful as (a) coding examples for people to look at, and/or (b) test
cases for core-server functionality. If a module gets kicked out to
PGXN we lose pretty much all the advantages of (b), and some of the
value of (a) because stuff that is in the contrib tree at least gets
maintained when we make server API changes.

So the fact that a module has conceptual flaws that are preventing it
from getting moved into core doesn't really have much to do with whether
we should remove it, IMV. The big question to be asking about ISN is
whether removing it would result in a loss of test coverage for the core
server; and I believe the answer is yes --- ISTR at least one bug that
we found specifically because ISN tickled it.

regards, tom lane

#37Peter Geoghegan
peter@2ndquadrant.com
In reply to: Tom Lane (#36)
Re: ISN was: Core Extensions relocation

On 15 November 2011 21:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:

There's a larger issue here, which is that a lot of the stuff in contrib
is useful as (a) coding examples for people to look at, and/or (b) test
cases for core-server functionality.  If a module gets kicked out to
PGXN we lose pretty much all the advantages of (b), and some of the
value of (a) because stuff that is in the contrib tree at least gets
maintained when we make server API changes.

The isn module is patently broken. It has the potential to damage the
project's reputation if someone chooses to make an example out of it.
I think that that's more important than any additional test coverage
it may bring. There's only a fairly marginal benefit at the expense of
a bad user experience for anyone who should use isn.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#38Robert Haas
robertmhaas@gmail.com
In reply to: Peter Geoghegan (#37)
Re: ISN was: Core Extensions relocation

On Tue, Nov 15, 2011 at 6:44 PM, Peter Geoghegan <peter@2ndquadrant.com> wrote:

On 15 November 2011 21:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:

There's a larger issue here, which is that a lot of the stuff in contrib
is useful as (a) coding examples for people to look at, and/or (b) test
cases for core-server functionality.  If a module gets kicked out to
PGXN we lose pretty much all the advantages of (b), and some of the
value of (a) because stuff that is in the contrib tree at least gets
maintained when we make server API changes.

The isn module is patently broken. It has the potential to damage the
project's reputation if someone chooses to make an example out of it.
I think that that's more important than any additional test coverage
it may bring. There's only a fairly marginal benefit at the expense of
a bad user experience for anyone who should use isn.

I agree. The argument that this code is useful as example code has
been offered before, but the justification is pretty thin when the
example code is an example of a horrible design that no one should
ever copy. I don't see that the isn code is doing anything that is so
unique that one of our add-on data types wouldn't be a suitable
(probably far better) template, but if it is, let's add similar
functionality to some other module, or add a new module that does
whatever that interesting thing is, or shove some code in
src/test/examples. We can't go on complaining one the one hand that
people don't install postgresql-contrib, and then on the other hand
insist on shipping really bad code in contrib. It's asking a lot for
someone who isn't already heavily involved in the project to
distinguish the wheat from the chaff.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#39Peter Geoghegan
peter@2ndquadrant.com
In reply to: Robert Haas (#38)
Re: ISN was: Core Extensions relocation

On 15 November 2011 23:57, Robert Haas <robertmhaas@gmail.com> wrote:

We can't go on complaining one the one hand that
people don't install postgresql-contrib, and then on the other hand
insist on shipping really bad code in contrib.  It's asking a lot for
someone who isn't already heavily involved in the project to
distinguish the wheat from the chaff.

ISTM that any sensible sanitisation of data guards against Murphy, not
Machiavelli. Exactly what sort of error is it imagined will be
prevented by enforcing that ISBN prefixes are "valid"? Transcription
and transposition errors are already guarded against very effectively
by the check digit.

If we can't put isn out of its misery, a sensible compromise would be
to rip out the prefix enforcement feature so that, for example, ISBN13
behaved exactly the same as EAN13. That would at least result in
predictable behaviour. The "strike" that separates each part of the
ISBN would be lost, but it is actually not visible on large ISBN
websites, presumably because it's a tar-pit of a problem.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#40Joshua Berkus
josh@agliodbs.com
In reply to: Robert Haas (#38)
Re: ISN was: Core Extensions relocation

All,

I agree. The argument that this code is useful as example code has
been offered before, but the justification is pretty thin when the
example code is an example of a horrible design that no one should
ever copy.

People are already using ISN (or at least ISBN) in production. It's been around for 12 years. So any step we take with contrib/ISN needs to take that into account -- just as we have with Tsearch2 and XML2.

One can certainly argue that some of the stuff in /contrib would be better on PGXN. But in that case, it's not limited to ISN; there are several modules of insufficient quality (including intarray and ltree) or legacy nature which ought to be pushed out. Probably most of them.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
San Francisco

#41Peter Geoghegan
peter@2ndquadrant.com
In reply to: Joshua Berkus (#40)
Re: ISN was: Core Extensions relocation

On 16 November 2011 01:09, Joshua Berkus <josh@agliodbs.com> wrote:

People are already using ISN (or at least ISBN) in production.  It's been around for 12 years.

contrib/isn has been around since 2006. The argument "some unknowable
number of people are using this feature in production" could equally
well apply to anything that we might consider deprecating.

I am not arguing for putting isn on PGXN. I'm arguing for actively
warning people against using it, because it is harmful. Any serious
use of the ISBN datatypes can be expected to break unpredictably one
day, and the only thing that someone can do in that situation is to
write their own patch to contrib/isn. They'd then have to wait for
that patch to be accepted if they didn't want to fork, which is a very
bad situation indeed. This already happened once.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#42Greg Smith
greg@2ndQuadrant.com
In reply to: Joshua Berkus (#25)
Re: Core Extensions relocation

Well, this discussion veering off into ISN has certainly validated my
gut feel that I should touch only the minimum number of things that
kills my pain points, rather than trying any more ambitious
restructuring. I hope that packaged extensions become so popular that
some serious cutting can happen to contrib, especially the data type
additions. If something as big as PostGIS can live happily as an
external project, surely most of these can too.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#43Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#39)
Re: ISN was: Core Extensions relocation

Peter Geoghegan <peter@2ndquadrant.com> writes:

If we can't put isn out of its misery, a sensible compromise would be
to rip out the prefix enforcement feature so that, for example, ISBN13
behaved exactly the same as EAN13.

That might be a reasonable compromise. Certainly the check-digit
calculation is much more useful for validity checking than the prefix
checking.

regards, tom lane

#44Robert Haas
robertmhaas@gmail.com
In reply to: Dimitri Fontaine (#23)
Re: Core Extensions relocation

On Tue, Nov 15, 2011 at 5:50 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:

Thom Brown <thom@linux.com> writes:

None of those others perform such a role.  Instead they add additional
functionality intended to be utilised as part of general data usage,
adding new types, operators, query functions etc.  Maybe the term
"core" is inappropriate.  Instead we might wish to refer to them as
"utility extensions" or something like that, although that may be just
as vague.

The term “core” here intends to show off that those extensions are
maintained by the core PostgreSQL developer team. If needs be, those
extensions will get updated in minor releases (crash, bugs, security,
etc).

Everything in contrib meets that definition, more or less.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#45Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Robert Haas (#44)
Re: Core Extensions relocation

Robert Haas <robertmhaas@gmail.com> writes:

The term “core” here intends to show off that those extensions are
maintained by the core PostgreSQL developer team. If needs be, those
extensions will get updated in minor releases (crash, bugs, security,
etc).

Everything in contrib meets that definition, more or less.

Yeah? It would only mean that Josh Berkus complaint about the naming is
to be followed.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

#46Robert Haas
robertmhaas@gmail.com
In reply to: Dimitri Fontaine (#45)
Re: Core Extensions relocation

On Wed, Nov 16, 2011 at 9:20 AM, Dimitri Fontaine
<dimitri@2ndquadrant.fr> wrote:

Robert Haas <robertmhaas@gmail.com> writes:

The term “core” here intends to show off that those extensions are
maintained by the core PostgreSQL developer team. If needs be, those
extensions will get updated in minor releases (crash, bugs, security,
etc).

Everything in contrib meets that definition, more or less.

Yeah? It would only mean that Josh Berkus complaint about the naming is
to be followed.

I am not sure I'm quite following you, but I'm unaware that there are
some contrib modules that we maintain more than others. Bugs and
security vulnerabilities in any of them are typically fixed when
reported. Now, sometimes we might not be able to fix a bug because of
some architectural deficiency, but that also happens in the server -
consider, for example, the recent discussion of creating a table in a
schema that is concurrently being dropped, which is likely to require
far more invasive fixing than we are probably willing to do anywhere
other than master.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#47Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#43)
Re: ISN was: Core Extensions relocation

On 11/15/11 7:40 PM, Tom Lane wrote:

Peter Geoghegan <peter@2ndquadrant.com> writes:

If we can't put isn out of its misery, a sensible compromise would be
to rip out the prefix enforcement feature so that, for example, ISBN13
behaved exactly the same as EAN13.

That might be a reasonable compromise. Certainly the check-digit
calculation is much more useful for validity checking than the prefix
checking.

Sounds good to me. FWIW, I know that ISBN is being used for some
library software, so a backwards-compatible fix would be very desirable.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#48Tom Lane
tgl@sss.pgh.pa.us
In reply to: Josh Berkus (#47)
Re: ISN was: Core Extensions relocation

Josh Berkus <josh@agliodbs.com> writes:

On 11/15/11 7:40 PM, Tom Lane wrote:

Peter Geoghegan <peter@2ndquadrant.com> writes:

If we can't put isn out of its misery, a sensible compromise would be
to rip out the prefix enforcement feature so that, for example, ISBN13
behaved exactly the same as EAN13.

That might be a reasonable compromise. Certainly the check-digit
calculation is much more useful for validity checking than the prefix
checking.

Sounds good to me. FWIW, I know that ISBN is being used for some
library software, so a backwards-compatible fix would be very desirable.

How backwards-compatible do you mean? Do you think we need to keep the
existing prefix-check logic as an option, or can we just drop it and be
done?

(I'm a bit concerned about the angle that the code has some smarts
currently about where to put hyphens in output. If we rip that out
it could definitely break application code, whereas dropping a check
shouldn't.)

regards, tom lane

#49Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#48)
Re: ISN was: Core Extensions relocation

(I'm a bit concerned about the angle that the code has some smarts
currently about where to put hyphens in output. If we rip that out
it could definitely break application code, whereas dropping a check
shouldn't.)

Right. Do we need to dump the hyphen logic?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#50Peter Geoghegan
peter@2ndquadrant.com
In reply to: Josh Berkus (#49)
Re: ISN was: Core Extensions relocation

On 17 November 2011 02:32, Josh Berkus <josh@agliodbs.com> wrote:

(I'm a bit concerned about the angle that the code has some smarts
currently about where to put hyphens in output.  If we rip that out
it could definitely break application code, whereas dropping a check
shouldn't.)

Right.  Do we need to dump the hyphen logic?

Only if you think that it's better to have something that covers many
cases but is basically broke. Perhaps it's different for code that is
already committed, but in the case of new submissions it tends to be
better to have something that is limited in a well-understood way
rather than less limited in a way that is unpredictable or difficult
to reason about.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#51Tom Lane
tgl@sss.pgh.pa.us
In reply to: Peter Geoghegan (#50)
Re: ISN was: Core Extensions relocation

Peter Geoghegan <peter@2ndquadrant.com> writes:

On 17 November 2011 02:32, Josh Berkus <josh@agliodbs.com> wrote:

Right. Do we need to dump the hyphen logic?

Only if you think that it's better to have something that covers many
cases but is basically broke. Perhaps it's different for code that is
already committed, but in the case of new submissions it tends to be
better to have something that is limited in a well-understood way
rather than less limited in a way that is unpredictable or difficult
to reason about.

Well, as was stated upthread, we might have bounced this module in toto
if it were submitted today. But contrib/isn has been there since 2006,
and its predecessor contrib/isbn_issn was there since 1998, and both of
those submissions came from (different) people who needed the
functionality bad enough to write it. It's not reasonable to suppose
that nobody is using it today. Ergo, we can't just summarily break
backwards compatibility on the grounds that we don't like the design.
Heck, we don't even have a field bug report that the design limitation
is causing any real problems for real users ... so IMO, the claims that
this is dangerously broken are a bit overblown.

regards, tom lane

#52Josh Berkus
josh@agliodbs.com
In reply to: Tom Lane (#51)
Re: ISN was: Core Extensions relocation

On 11/16/11 7:54 PM, Tom Lane wrote:

Heck, we don't even have a field bug report that the design limitation
is causing any real problems for real users ... so IMO, the claims that
this is dangerously broken are a bit overblown.

This is why I mentioned clients using ISBN in production. I have yet to
see any actual breakage in the field. Granted, both clients are
US-only. I don't believe either of these clients is depending on the
prefix-check, and that's the sort of thing we could announce in the
release notes.

I do get the feeling that Peter got burned by ISN, given his vehemence
in erasing it from the face of the earth. So that's one bug report. ;-)

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#53Peter Geoghegan
peter@2ndquadrant.com
In reply to: Tom Lane (#51)
Re: ISN was: Core Extensions relocation

On 17 November 2011 03:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:

 It's not reasonable to suppose
that nobody is using it today.

I didn't suppose that no one is using it, but that those that are
using it are unaware of the risks with prefix validation, and that
there will be a rude awakening for them.

Ergo, we can't just summarily break
backwards compatibility on the grounds that we don't like the design.
Heck, we don't even have a field bug report that the design limitation
is causing any real problems for real users ... so IMO, the claims that
this is dangerously broken are a bit overblown.

I think that's it's rather unlikely that removing hyphenation and
prefix validation would adversely affect anyone, provided that it was
well documented and wasn't applied to stable branches. If it were up
to me, I might remove validation from stable branches but keep
hyphenation, while removing both for 9.2 . After all, hyphenation will
break anyway, so they're worse off continuing to rely on hyphenation
when it cannot actually be relied on.

On 17 November 2011 05:03, Josh Berkus <josh@agliodbs.com> wrote:

I do get the feeling that Peter got burned by ISN, given his vehemence
in erasing it from the face of the earth.  So that's one bug report.  ;-)

Actually, I reviewed a band-aid patch about a year ago, and I was
fairly concerned at the obvious wrong-headedness of something in our
tree. I have a certain amount of domain knowledge here, but I've never
actually attempted to use it in production. For all its faults, it is
at least obviously broken to someone that knows about GS1 standards
(having separate bar code datatypes is just not useful at all),
although that tends to not be Americans. This may account for why
we've heard so few complaints. It's also really easy and effective to
create your own domain, and the flexibility that this affords tends to
make an off-the-shelf solution unattractive (I've done things like
store "compacted" bar codes that will subsequently be used to match a
full bar code with an embedded price - I didn't want to enforce a
check digit for the compacted representation).

If I had a lot of time to work on fixing contrib/isn, I still
wouldn't, because the correct thing to do is to produce your own
domain.

--
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

#54Robert Haas
robertmhaas@gmail.com
In reply to: Peter Geoghegan (#53)
Re: ISN was: Core Extensions relocation

On Thu, Nov 17, 2011 at 10:44 AM, Peter Geoghegan <peter@2ndquadrant.com> wrote:

I think that's it's rather unlikely that removing hyphenation and
prefix validation would adversely affect anyone, provided that it was
well documented and wasn't applied to stable branches. If it were up
to me, I might remove validation from stable branches but keep
hyphenation, while removing both for 9.2 . After all, hyphenation will
break anyway, so they're worse off continuing to rely on hyphenation
when it cannot actually be relied on.

Well, keep in mind that most people test their code. It seems likely
that it actually DOES work pretty well for the people who are using
the module. The ones for whom it didn't work presumably would have
complained (and, mostly, they haven't) or abandoned the module (in
which case they're irrelevant to the discussion of who might be hurt
by this change). I'd be willing to bet a nickle that we'll get
complaints if we rip that hyphenation behavior out.

At the same time, I still think we should push this out to PGXN or
pgfoundry or something. The fact that it's useful to some people does
not mean that it's a good example for other people to follow, or that
we want the core distribution to be in the process of tracking ISBN
prefixes on behalf of PostgreSQL users everywhere.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#55Tom Lane
tgl@sss.pgh.pa.us
In reply to: Robert Haas (#54)
Re: ISN was: Core Extensions relocation

Robert Haas <robertmhaas@gmail.com> writes:

At the same time, I still think we should push this out to PGXN or
pgfoundry or something. The fact that it's useful to some people does
not mean that it's a good example for other people to follow, or that
we want the core distribution to be in the process of tracking ISBN
prefixes on behalf of PostgreSQL users everywhere.

I wouldn't object to that as long as we replaced it with some other
module that had a similar relationship to core types. We'd never have
realized the need for CREATE TYPE's LIKE option, until it was far too
late, if we'd not had contrib/isn to show up the problem (cf
commit 3f936aacc057e4b391ab953fea2ffb689a12a8bc).

But really I think this discussion is making a mountain out of a
molehill. If tracking ISBN prefix changes is such a time-consuming
activity, why have we not seen a steady stream of update patches from
users? By my count there's been just one such patch since 2006
(commit 6d1af7b2180719102a907bd3e35d218b43e76ad1).

regards, tom lane

#56Peter Eisentraut
peter_e@gmx.net
In reply to: Greg Smith (#20)
Re: Core Extensions relocation

On mån, 2011-11-14 at 20:44 -0500, Greg Smith wrote:

The very specific problem I was most concerned about eliminating was
people discovering they needed an extension to troubleshoot
performance or corruption issues, only to discover it wasn't
available--because they hadn't installed the postgresql-contrib
package.

Who's to say that after this, the core extensions won't end up in a new
separate package postgresql-extensions (or similar) or might even stay
in postgresql-contrib, for compatibility?

#57Greg Smith
greg@2ndQuadrant.com
In reply to: Peter Eisentraut (#56)
Re: Core Extensions relocation

On 11/17/2011 03:18 PM, Peter Eisentraut wrote:

Who's to say that after this, the core extensions won't end up in a new
separate package postgresql-extensions (or similar) or might even stay
in postgresql-contrib, for compatibility?

I don't know why packagers would make an active decision that will make
their lives more difficult, with no benefit to them and a regression
against project recommendations for their users. The last thing anyone
packaging PostgreSQL wants is more packages to deal with; there are
already too many. Each of the current sub-packages has a legitimate
technical or distribution standard reason to exist--guidelines like
"break out client and server components" or "minimize the package
dependencies for the main server". I can't think of any good reason
that would inspire the sort of drift you're concerned about.

There's little compatibility argument beyond consistency with the
previous packages. The reason why this is suddenly feasible now is that
the under the hood change are almost all hidden by CREATE EXTENSION.

And if some wanted to wander this way, they'll end up having to maintain
a doc patch to address the fact that they've broken with project
recommendations. This text in what I submitted will no longer be true:

"This appendix contains information regarding core extensions that are
built and included with a standard installation of PostgreSQL."

One of the reasons I picked the name I did was to contrast with the
existing description of contrib:

"porting tools, analysis utilities, and plug-in features that are not
part of the core PostgreSQL system, mainly because they address a
limited audience or are too experimental to be part of the main source
tree."

That says it's perfectly fine to make these optional in another
package--they're not "part of the core". That scary wording is
practically telling packagers to separate them, so it's easy to keep the
experimental stuff away from the production quality components.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#58Tom Lane
tgl@sss.pgh.pa.us
In reply to: Greg Smith (#57)
Re: Core Extensions relocation

Greg Smith <greg@2ndQuadrant.com> writes:

On 11/17/2011 03:18 PM, Peter Eisentraut wrote:

Who's to say that after this, the core extensions won't end up in a new
separate package postgresql-extensions (or similar) or might even stay
in postgresql-contrib, for compatibility?

I don't know why packagers would make an active decision that will make
their lives more difficult, with no benefit to them and a regression
against project recommendations for their users.

Why do you figure that, exactly? The path of least resistance will
be precisely to leave everything packaged as it is, in a single
postgresql-contrib module. I'm pretty likely to do that myself for
Fedora and RHEL. Subdividing/rearranging contrib makes the packager's
life more complicated, *and* makes his users' lives more complicated,
if only because things aren't where they were before. It seems unlikely
to happen, at least in the near term.

And if some wanted to wander this way, they'll end up having to maintain
a doc patch to address the fact that they've broken with project
recommendations. This text in what I submitted will no longer be true:

You're assuming anybody will notice or care about that text, if indeed
it gets committed/released with that wording at all.

The upstream project can't force these decisions on packagers, and it
doesn't help to go about under the illusion that we can.

regards, tom lane

#59Dimitri Fontaine
dimitri@2ndQuadrant.fr
In reply to: Tom Lane (#58)
Re: Core Extensions relocation

Tom Lane <tgl@sss.pgh.pa.us> writes:

Why do you figure that, exactly? The path of least resistance will
be precisely to leave everything packaged as it is, in a single
postgresql-contrib module. I'm pretty likely to do that myself for
Fedora and RHEL. Subdividing/rearranging contrib makes the packager's
life more complicated, *and* makes his users' lives more complicated,
if only because things aren't where they were before. It seems unlikely
to happen, at least in the near term.

Then if we want packagers to move, what about removing all the
extensions not listed by Greg from the contrib/ directory and inventing
another place where to manage them, which is not automatically built,
but still part of buildfarm tests, if at all possible.

Then the only change we suggest to packagers is to have the main
PostgreSQL package install the contrib one by means of dependencies.

I don't much like this solution, but that's how I read your email. The
status quo is not a good place to live in.

The upstream project can't force these decisions on packagers, and it
doesn't help to go about under the illusion that we can.

Really? You are packaging for RHEL, Dave is responsible for the windows
packaging, Devrim is covering the other RPM systems (apart from SuSE
maybe and I'm not even sure) and Martin is caring for debian and ubuntu
and following along. We're missing BSD ports packagers, and we're
covering like 90% or more of the servers and developers installs.

We can't force everybody hands to doing it our way, but I'm pretty sure
we can talk to them and see what they think about the usefulness of this
proposal and how they intend to react. We're not *that* disconnected.

Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

#60Josh Berkus
josh@agliodbs.com
In reply to: Dimitri Fontaine (#59)
Re: Core Extensions relocation

On 11/18/11 12:27 PM, Dimitri Fontaine wrote:

Tom Lane <tgl@sss.pgh.pa.us> writes:

Why do you figure that, exactly? The path of least resistance will
be precisely to leave everything packaged as it is, in a single
postgresql-contrib module. I'm pretty likely to do that myself for
Fedora and RHEL. Subdividing/rearranging contrib makes the packager's
life more complicated, *and* makes his users' lives more complicated,
if only because things aren't where they were before. It seems unlikely
to happen, at least in the near term.

Then if we want packagers to move, what about removing all the
extensions not listed by Greg from the contrib/ directory and inventing
another place where to manage them, which is not automatically built,
but still part of buildfarm tests, if at all possible.

Actually, the whole idea is that the "Core Management Extensions" should
move from the -contrib module to the -server module. That is, those
extensions should always get installed with any server.

Of course, packagers may then reasonably ask why that code is not just
part of the core?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#61Greg Smith
greg@2ndQuadrant.com
In reply to: Josh Berkus (#60)
Re: Core Extensions relocation

On 11/18/2011 03:36 PM, Josh Berkus wrote:

Of course, packagers may then reasonably ask why that code is not just
part of the core?

Let me step back from the implementation ideas for a minute and talk
about this, and how it ties into release philosophy. The extensions
infrastructure for PostgreSQL is one of its strongest features. We can
use it as a competitive advantage over other databases, one that can
make this database stable and continuously innovating at the same time.
But that's not happening enough yet; I feel this change is a major step
in that direction. There's no demonstration that extensions are edible
dog food like the core database visibly eating a can.

To see why this matters so much, let's compare two stereotypes of
PostgreSQL users at different extremes of upgrade tolerance. First we
have the classic enterprise DBA. Relative to this person's
expectations, PostgreSQL releases are way too fast. They can't upgrade
their database every year; that's madness. This is the person who we
yell at about how they should upgrade to the latest minor point release,
because once they have a working system they touch *nothing*. For this
user, the long beta period of new PostgreSQL releases, and its general
conservative development model, are key components to PostgreSQL being
suitable for them.

At the other extreme, we have the software developer with a frantic
development/release schedule, the one who's running the latest stable
version of every tool they use. This person expects some bugs in them,
and the first reaction to running into one is to ask "is this fixed in
the next version?" You'll find at least one component in their stack
that's labeled "compiled from the latest checkout" because that was the
only way to get a working version. And to them, the yearly release
cycle of PostgreSQL is glacial. When they run into a limitation that is
impacting a user base that's doubling every few months, they can't wait
a year for a fix; they could easily go out of business by then.

The key to satisfying both these extremes at once is a strong extensions
infrastructure, led by the example of serious tools that are provided
that way in the PostgreSQL core. For this to catch on, we need the
classic DBAs to trust core extensions enough to load them into
production. They don't do that now because the current contrib
description sounds too scary, and they may not even have loaded that
package onto the server. And we need people who want more frequent
database core changes to see that extensions are a viable way to build
some pretty extensive server hacks.

I've submitted two changes to this CommitFest that are enhancing
features in this "core extensions" set. Right now I have multiple
customers who are desperate for both of those features. With
extensions, I can give them changes that solve their immediate crisis
right now, almost a full year before they could possibly appear in a
proper release of PostgreSQL. And then I can push those back toward
community PostgreSQL, with any luck landing in the next major version.
Immediate gratification for the person funding development, and peer
reviewed code that goes through a long beta and release cycle. That's
the vision I have for a PostgreSQL that is simultaneously stable and
agile. The easiest way to get there it is to lead by example--by having
extensions that provide necessary, visible components to core, while
still being obviously separate components. That's the best approach for
proving this development model works and is suitable for everyone.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#62Greg Smith
greg@2ndQuadrant.com
In reply to: Tom Lane (#58)
Re: Core Extensions relocation

On 11/18/2011 09:35 AM, Tom Lane wrote:

Subdividing/rearranging contrib makes the packager's
life more complicated, *and* makes his users' lives more complicated,
if only because things aren't where they were before. It seems unlikely
to happen, at least in the near term.

Users are visibly suffering by the current packaging. Production DBAs
are afraid to install contrib because it's described as untrustworthy.
They are hit by emergencies that the inspection tools would help with,
but cannot get contrib installed easily without root permissions. They
have performance issues that the contrib modules I'm trying to relocate
into the server package would help with, but company policies related to
post-deployment installation mean they cannot use them. They have to
always be installed to make this class of problem go away.

If you feel there are more users that would be negatively impacted by
some directories moving than what I'm describing above, we are a very
fundamental disagreement here. The status quote for all of these
extensions is widely understood to be unusable in common corporate
environments. Packagers should be trying to improve the PostgreSQL
experience, and I'm trying to help with that.

In the face of pushback from packagers, I can roll my own packages that
are designed without this problem; I'm being pushed into doing that
instead of working on community PostgreSQL already. But I'd really
prefer to see this very common problem identified as so important it
should get fixed everywhere instead.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#63Bruce Momjian
bruce@momjian.us
In reply to: Greg Smith (#61)
Re: Core Extensions relocation

Greg Smith wrote:

I've submitted two changes to this CommitFest that are enhancing
features in this "core extensions" set. Right now I have multiple
customers who are desperate for both of those features. With
extensions, I can give them changes that solve their immediate crisis
right now, almost a full year before they could possibly appear in a
proper release of PostgreSQL. And then I can push those back toward
community PostgreSQL, with any luck landing in the next major version.
Immediate gratification for the person funding development, and peer
reviewed code that goes through a long beta and release cycle. That's
the vision I have for a PostgreSQL that is simultaneously stable and
agile. The easiest way to get there it is to lead by example--by having
extensions that provide necessary, visible components to core, while
still being obviously separate components. That's the best approach for
proving this development model works and is suitable for everyone.

I think a question is how often people are waiting for features that
actually can be addressed in a contrib/plugin way. My gut feeling is
that most missing features have to be added to the server core (e.g.
index-only scans) and are not possible to add in a contrib/plugin way.

I am not saying this would not help, but I am saying that this is going
to address only a small part of the goal of getting features to users
quicker.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#64Bruce Momjian
bruce@momjian.us
In reply to: Peter Geoghegan (#53)
Re: ISN was: Core Extensions relocation

Peter Geoghegan wrote:

On 17 November 2011 03:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:

?It's not reasonable to suppose
that nobody is using it today.

I didn't suppose that no one is using it, but that those that are
using it are unaware of the risks with prefix validation, and that
there will be a rude awakening for them.

Ergo, we can't just summarily break
backwards compatibility on the grounds that we don't like the design.
Heck, we don't even have a field bug report that the design limitation
is causing any real problems for real users ... so IMO, the claims that
this is dangerously broken are a bit overblown.

I think that's it's rather unlikely that removing hyphenation and
prefix validation would adversely affect anyone, provided that it was
well documented and wasn't applied to stable branches. If it were up
to me, I might remove validation from stable branches but keep
hyphenation, while removing both for 9.2 . After all, hyphenation will
break anyway, so they're worse off continuing to rely on hyphenation
when it cannot actually be relied on.

Clarification: Our policy for patching back-branches is that the bug
has to affect many users, be serious, and the fix has to be easily
tested.

For a user-visible change (which this would be), the criteria is even
more strict.

I don't see any of this reaching the level that it needs to be
backpatched, so I think we have to accept that this will be 9.2-only
change.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#65Joshua Berkus
josh@agliodbs.com
In reply to: Bruce Momjian (#64)
Re: ISN was: Core Extensions relocation

Bruce,

I don't see any of this reaching the level that it needs to be
backpatched, so I think we have to accept that this will be 9.2-only
change.

Agreed. If users encounter issues with the prefix in the field, it will be easy enough for them to back-patch. But we don't want to be responsible for it as a project.

--Josh

#66Greg Smith
greg@2ndQuadrant.com
In reply to: Bruce Momjian (#63)
Re: Core Extensions relocation

On 11/21/2011 11:40 AM, Bruce Momjian wrote:

I think a question is how often people are waiting for features that
actually can be addressed in a contrib/plugin way. My gut feeling is
that most missing features have to be added to the server core (e.g.
index-only scans) and are not possible to add in a contrib/plugin way.

Good question; let's talk about 9.0. We were building/distributing
three things for that version that poked into the server:

-Replication monitoring tools that slipped from the 9.0 schedule,
similar to what became pg_stat_replication in 9.1
-An early version of what became hot_standby_feedback in 9.1.
-pg_streamrecv

While these weren't all packaged as extensions per se, all of them used
the PGXS interface. And they all provided deployment blocking features
to early adopters before those features were available in core, in some
cases after the issues they address had been encountered in production
deployments. As I was ranting on my blog recently, I'm seeing more
complaints recently about monitoring and management features--exactly
the sort of thing that you can improve as an extension, and that the
extensions I've proposed provide--than I am over missing big features.

Index-only scans are a good example, as one of the most requested
performance feature you can only get in core (I'd put them at #2 behind
materialized views for the customers I talk to). I wouldn't bet that
they are considered more important by a typical deployment than good
built-in query profiling though. I get complaints about query
monitoring from every single PostgreSQL install, while complaints about
not having index-only scans only come from the bigger installations.
Note how demand is high enough that we have two pg_stat_statements
replacements submitted right now.

--
Greg Smith 2ndQuadrant US greg@2ndQuadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us

#67Bruce Momjian
bruce@momjian.us
In reply to: Greg Smith (#66)
Re: Core Extensions relocation

Greg Smith wrote:

On 11/21/2011 11:40 AM, Bruce Momjian wrote:

I think a question is how often people are waiting for features that
actually can be addressed in a contrib/plugin way. My gut feeling is
that most missing features have to be added to the server core (e.g.
index-only scans) and are not possible to add in a contrib/plugin way.

Good question; let's talk about 9.0. We were building/distributing
three things for that version that poked into the server:

-Replication monitoring tools that slipped from the 9.0 schedule,
similar to what became pg_stat_replication in 9.1
-An early version of what became hot_standby_feedback in 9.1.
-pg_streamrecv

While these weren't all packaged as extensions per se, all of them used
the PGXS interface. And they all provided deployment blocking features
to early adopters before those features were available in core, in some
cases after the issues they address had been encountered in production
deployments. As I was ranting on my blog recently, I'm seeing more
complaints recently about monitoring and management features--exactly
the sort of thing that you can improve as an extension, and that the
extensions I've proposed provide--than I am over missing big features.

Index-only scans are a good example, as one of the most requested
performance feature you can only get in core (I'd put them at #2 behind
materialized views for the customers I talk to). I wouldn't bet that
they are considered more important by a typical deployment than good
built-in query profiling though. I get complaints about query
monitoring from every single PostgreSQL install, while complaints about
not having index-only scans only come from the bigger installations.
Note how demand is high enough that we have two pg_stat_statements
replacements submitted right now.

Agreed much of the edge stuff, e.g. monitoring, can be done as plugins.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ It's impossible for everything to be true. +

#68Robert Haas
robertmhaas@gmail.com
In reply to: Tom Lane (#58)
Re: Core Extensions relocation

On Fri, Nov 18, 2011 at 9:35 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Why do you figure that, exactly?  The path of least resistance will
be precisely to leave everything packaged as it is, in a single
postgresql-contrib module.  I'm pretty likely to do that myself for
Fedora and RHEL.  Subdividing/rearranging contrib makes the packager's
life more complicated, *and* makes his users' lives more complicated,
if only because things aren't where they were before.  It seems unlikely
to happen, at least in the near term.

When we discussed this topic at the developer's meeting, I thought we
had general consensus that it would be a good idea to package a
limited number of important and stable debugging tools with the core
server, and I had the impression that you (Tom) thought this was a
reasonable thing to do. If you don't, or if you did then but don't
now, then it seems to me that this conversation is dead in the water
for so long as you're the one packaging for Red Hat, and we should
just move on; you pretty much have unassailable personal veto power on
this issue. But let's not pretend that the conversation is about what
packagers in general will do, because I don't think it is. I think
it's about what you personally will do.

I think that if we move a few things into src/extension and set things
up such that they get installed even if you just do "make install"
rather than requiring "make install-world", packagers who don't have
any terribly strong personal agenda will decide that means they ought
to be shipped with the server. However, if you're personally
committed to making sure that all of that stuff remains in
postgresql-contrib in Red Hat/Fedora, regardless of where we move it
to on our end, then that's where it's going to be, at least on all Red
Hat-derived systems, which is a big enough chunk of the world to
matter quite a lot. Note that I'm not necessarily saying anything
about whether your reasons for such a decision might be good or bad;
I'm just pointing out that a good deal of our ability to make a change
in this area is within your personal control.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#69Josh Berkus
josh@agliodbs.com
In reply to: Robert Haas (#68)
Re: Core Extensions relocation

Tom,

I think that if we move a few things into src/extension and set things
up such that they get installed even if you just do "make install"
rather than requiring "make install-world", packagers who don't have
any terribly strong personal agenda will decide that means they ought
to be shipped with the server. However, if you're personally
committed to making sure that all of that stuff remains in
postgresql-contrib in Red Hat/Fedora, regardless of where we move it
to on our end, then that's where it's going to be, at least on all Red
Hat-derived systems, which is a big enough chunk of the world to
matter quite a lot. Note that I'm not necessarily saying anything
about whether your reasons for such a decision might be good or bad;
I'm just pointing out that a good deal of our ability to make a change
in this area is within your personal control.

Any response to this before I take it to the other packagers?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#70Josh Berkus
josh@agliodbs.com
In reply to: Josh Berkus (#69)
Re: Core Extensions relocation

All,

This is currently awaiting a check by gsmith that the 7 named extensions
do not add any new dependancies.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#71Josh Berkus
josh@agliodbs.com
In reply to: Josh Berkus (#70)
Re: Core Extensions relocation

Greg,

This is currently awaiting a check by gsmith that the 7 named extensions
do not add any new dependancies.

Are you going to investigate this? If not, I'll give it a try this weekend.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com

#72Josh Berkus
josh@agliodbs.com
In reply to: Josh Berkus (#71)
Re: Core Extensions relocation

All,

Andrew ran crake on these modules, and they do not add any links not
added by core postgres already.

As such, can we proceed with this patch? Greg, do you have an updated
version to run against HEAD?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com