Refactoring of heapam code.

Started by Anastasia Lubennikova, over 9 years ago. 15 messages.
#1 Anastasia Lubennikova
a.lubennikova@postgrespro.ru
2 attachment(s)

Working on page compression and some other issues related to
access methods, I found that the heap-related code looks
more complicated than it should be.
Since I got into this area anyway, I want to suggest a set of improvements.

There are a number of problems I see in the existing code:
Problem 1. Heap != Relation.

File heapam.c has a note:
* This file contains the heap_ routines which implement
* the POSTGRES heap access method used for all POSTGRES
* relations.
This statement is wrong, since we also have index relations,
which are implemented using other access methods.

Also, I think it's quite strange to have a function heap_create() that is
called from index_create(); it has absolutely nothing to do with the heap
access method.

And so on, and so on.

One more thing that requires refactoring is the name "RELKIND_RELATION".
We have a type of relation called "relation"...

Problem 2.
Some functions are shared between the heap and index access methods.
Maybe that used to be handy once, but now, I think, it's time to separate
them, because IndexTuples and HeapTuples have very little in common.
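
For reference, here is how little the two on-page tuple formats share.
These are abridged sketches of the structs in access/htup_details.h and
access/itup.h, with most comments and some details trimmed:

struct HeapTupleHeaderData
{
	union
	{
		HeapTupleFields		t_heap;		/* xmin/xmax visibility fields */
		DatumTupleFields	t_datum;	/* composite-datum metadata */
	}				t_choice;
	ItemPointerData	t_ctid;			/* current TID of this or newer version */
	uint16			t_infomask2;	/* number of attributes, plus flags */
	uint16			t_infomask;		/* various flag bits */
	uint8			t_hoff;			/* header size including null bitmap */
	bits8			t_bits[FLEXIBLE_ARRAY_MEMBER];	/* null bitmap */
};

typedef struct IndexTupleData
{
	ItemPointerData	t_tid;			/* TID of the heap tuple pointed to */
	unsigned short	t_info;			/* size and flag bits */
	/* key values follow; no visibility information at all */
} IndexTupleData;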

Problem 3. The word "heap" is used in many different contexts.
Heap - storage where we keep tuples and visibility information.
Generally speaking, it can be implemented over any data structure,
not just a plain table as it is done now.

Heap - an access method that provides a set of routines for plain table
storage, such as insert, delete, update, gettuple, vacuum and so on.
We use this access method not only for users' tables.
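
To illustrate the second meaning, here is a purely hypothetical sketch of
what an explicit interface for such an access method could look like.
None of these names exist in the patches or in the current sources; they
are invented for illustration only:

/*
 * Hypothetical sketch: a routine table for a primary-storage access
 * method, if heap were factored out behind an explicit API.
 * All names here are invented.
 */
typedef struct PrimaryStorageRoutine
{
	Oid		(*tuple_insert) (Relation rel, HeapTuple tup, CommandId cid);
	void	(*tuple_delete) (Relation rel, ItemPointer tid, CommandId cid);
	void	(*tuple_update) (Relation rel, ItemPointer otid,
							 HeapTuple newtup, CommandId cid);
	bool	(*tuple_fetch) (Relation rel, ItemPointer tid, HeapTuple tup);
	void	(*relation_vacuum) (Relation rel);
} PrimaryStorageRoutine;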

There are many types of relations (pg_class.h):
#define RELKIND_RELATION       'r' /* ordinary table */
#define RELKIND_INDEX          'i' /* secondary index */
#define RELKIND_SEQUENCE       'S' /* sequence object */
#define RELKIND_TOASTVALUE     't' /* for out-of-line values */
#define RELKIND_VIEW           'v' /* view */
#define RELKIND_COMPOSITE_TYPE 'c' /* composite type */
#define RELKIND_FOREIGN_TABLE  'f' /* foreign table */
#define RELKIND_MATVIEW        'm' /* materialized view */

They can be logically separated into three categories:
"primary storage" - r, S, t, m. They store data and visibility information.
The only implementation is heapam.c.
"secondary index" - i. They store some data and pointers to primary storage.
Implemented by various existing AMs and managed via the AM API.
"virtual relations" - v, c, f. They have no physical storage, only entries
in caches and catalogs.

Now, for some reason, we sometimes use the name "heap" for both
"primary storage" and "virtual relations". See src/backend/catalog/heap.c.
But some functions work only with "primary storage"; see pgstat_relation().
And we constantly have to check relkind everywhere.
I'd like this to be clear from the code interface and naming; a hypothetical
helper in that spirit is sketched below.
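
/*
 * Hypothetical sketch only, not part of the patches: centralize the
 * relkind checks that are now open-coded at call sites.
 * Both function names are invented for illustration.
 */
static inline bool
RelKindIsPrimaryStorage(char relkind)
{
	return relkind == RELKIND_RELATION ||
		   relkind == RELKIND_SEQUENCE ||
		   relkind == RELKIND_TOASTVALUE ||
		   relkind == RELKIND_MATVIEW;
}

static inline bool
RelKindIsVirtual(char relkind)
{
	return relkind == RELKIND_VIEW ||
		   relkind == RELKIND_COMPOSITE_TYPE ||
		   relkind == RELKIND_FOREIGN_TABLE;
}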

So here is a couple of patches. They do not cover all the mentioned
problems, but I'd like to get feedback before continuing.

Patch 1:
There is a macro "PageGetItem" in bufpage.h that is used to retrieve an item
from the given page. It is used for both heap and index tuples.
It doesn't seem to be a problem until you try to find anything in this code.

The first patch "PageGetItem_refactoring.patch" is intended to fix it.
It is just renaming:
(IndexTuple) PageGetItem ---> PageGetItemIndex
(HeapTupleHeader) PageGetItem ---> PageGetItemHeap
Other types of tuples (BrinTuple, SpGistInnerTuple, SpGistLeafTuple,
SpGistDeadTuple) still use PageGetItem.

I also changed functions that do not access the tuple's data, but only need
HeapTupleHeader fields. They use a macro PageGetItemHeapHeaderOnly.

I do not insist on these particular names, by the way.
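
To make the renaming concrete, a typical call site changes like this
(distilled from the hunks in the attached patch):

/* before: one macro plus an explicit cast for every tuple kind */
itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
htup = (HeapTupleHeader) PageGetItem(page, lp);

/* after: the macro name states the tuple kind, and the cast lives in
 * one place (bufpage.h) */
itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
htup = PageGetItemHeap(page, lp);

/* call sites that only read or set HeapTupleHeader fields are marked
 * explicitly, even though the macro body is currently the same */
htup = PageGetItemHeapHeaderOnly(page, lp);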

Patch 2.
heapam.c, hio.c and src/backend/catalog/heap.c have a lot of confusing names
and outdated comments. The patch is intended to improve them.
Fun fact: I found several "check it later" comments unchanged since the
"Postgres95 1.01 Distribution - Virgin Sources" commit.

I wonder if we can find a better name for relation_drop_with_catalog()
since, again, it's not about all kinds of relations. We use another function
to drop index relations.

I hope these changes will be useful for future development.
The suggested patches are mostly about renaming and rearrangement of
functions between files, so, if nobody has conceptual objections, all I need
from reviewers is an attentive look to find typos, grammar mistakes and
overlooked areas.

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

Attachments:

p1_pagegetitem_refactoring.patch (text/x-patch)
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index 4983bbb..257b609 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -130,7 +130,7 @@ GetBTPageStatistics(BlockNumber blkno, Buffer buffer, BTPageStat *stat)
 
 		ItemId		id = PageGetItemId(page, off);
 
-		itup = (IndexTuple) PageGetItem(page, id);
+		itup = PageGetItemIndex(page, id);
 
 		item_size += IndexTupleSize(itup);
 
@@ -355,7 +355,7 @@ bt_page_items(PG_FUNCTION_ARGS)
 		if (!ItemIdIsValid(id))
 			elog(ERROR, "invalid ItemId");
 
-		itup = (IndexTuple) PageGetItem(uargs->page, id);
+		itup = PageGetItemIndex(uargs->page, id);
 
 		j = 0;
 		values[j++] = psprintf("%d", uargs->offset);
diff --git a/contrib/pageinspect/heapfuncs.c b/contrib/pageinspect/heapfuncs.c
index 904eaef..98d4894 100644
--- a/contrib/pageinspect/heapfuncs.c
+++ b/contrib/pageinspect/heapfuncs.c
@@ -198,7 +198,7 @@ heap_page_items(PG_FUNCTION_ARGS)
 
 			/* Extract information from the tuple header */
 
-			tuphdr = (HeapTupleHeader) PageGetItem(page, id);
+			tuphdr = PageGetItemHeap(page, id);
 
 			values[4] = UInt32GetDatum(HeapTupleHeaderGetRawXmin(tuphdr));
 			values[5] = UInt32GetDatum(HeapTupleHeaderGetRawXmax(tuphdr));
diff --git a/contrib/pg_visibility/pg_visibility.c b/contrib/pg_visibility/pg_visibility.c
index 7034066..ab068e6 100644
--- a/contrib/pg_visibility/pg_visibility.c
+++ b/contrib/pg_visibility/pg_visibility.c
@@ -619,7 +619,7 @@ collect_corrupt_items(Oid relid, bool all_visible, bool all_frozen)
 			}
 
 			/* Initialize a HeapTupleData structure for checks below. */
-			tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid);
+			tuple.t_data = PageGetItemHeap(page, itemid);
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = relid;
 
diff --git a/contrib/pgstattuple/pgstatapprox.c b/contrib/pgstattuple/pgstatapprox.c
index a49ff54..8dc01e9 100644
--- a/contrib/pgstattuple/pgstatapprox.c
+++ b/contrib/pgstattuple/pgstatapprox.c
@@ -145,7 +145,7 @@ statapprox_heap(Relation rel, output_type *stat)
 
 			ItemPointerSet(&(tuple.t_self), blkno, offnum);
 
-			tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid);
+			tuple.t_data = PageGetItemHeap(page, itemid);
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(rel);
 
diff --git a/src/backend/access/gin/ginentrypage.c b/src/backend/access/gin/ginentrypage.c
index 8c0bfe9..0ec9c2d 100644
--- a/src/backend/access/gin/ginentrypage.c
+++ b/src/backend/access/gin/ginentrypage.c
@@ -236,7 +236,7 @@ getRightMostTuple(Page page)
 {
 	OffsetNumber maxoff = PageGetMaxOffsetNumber(page);
 
-	return (IndexTuple) PageGetItem(page, PageGetItemId(page, maxoff));
+	return PageGetItemIndex(page, PageGetItemId(page, maxoff));
 }
 
 static bool
@@ -307,7 +307,7 @@ entryLocateEntry(GinBtree btree, GinBtreeStack *stack)
 			Datum		key;
 			GinNullCategory category;
 
-			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, mid));
+			itup = PageGetItemIndex(page, PageGetItemId(page, mid));
 			attnum = gintuple_get_attrnum(btree->ginstate, itup);
 			key = gintuple_get_key(btree->ginstate, itup, &category);
 			result = ginCompareAttEntries(btree->ginstate,
@@ -332,7 +332,7 @@ entryLocateEntry(GinBtree btree, GinBtreeStack *stack)
 	Assert(high >= FirstOffsetNumber && high <= maxoff);
 
 	stack->off = high;
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, high));
+	itup = PageGetItemIndex(page, PageGetItemId(page, high));
 	Assert(GinGetDownlink(itup) != GIN_ROOT_BLKNO);
 	return GinGetDownlink(itup);
 }
@@ -378,7 +378,7 @@ entryLocateLeafEntry(GinBtree btree, GinBtreeStack *stack)
 		GinNullCategory category;
 		int			result;
 
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, mid));
+		itup = PageGetItemIndex(page, PageGetItemId(page, mid));
 		attnum = gintuple_get_attrnum(btree->ginstate, itup);
 		key = gintuple_get_key(btree->ginstate, itup, &category);
 		result = ginCompareAttEntries(btree->ginstate,
@@ -414,7 +414,7 @@ entryFindChildPtr(GinBtree btree, Page page, BlockNumber blkno, OffsetNumber sto
 	/* if page isn't changed, we returns storedOff */
 	if (storedOff >= FirstOffsetNumber && storedOff <= maxoff)
 	{
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, storedOff));
+		itup = PageGetItemIndex(page, PageGetItemId(page, storedOff));
 		if (GinGetDownlink(itup) == blkno)
 			return storedOff;
 
@@ -424,7 +424,7 @@ entryFindChildPtr(GinBtree btree, Page page, BlockNumber blkno, OffsetNumber sto
 		 */
 		for (i = storedOff + 1; i <= maxoff; i++)
 		{
-			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, i));
+			itup = PageGetItemIndex(page, PageGetItemId(page, i));
 			if (GinGetDownlink(itup) == blkno)
 				return i;
 		}
@@ -434,7 +434,7 @@ entryFindChildPtr(GinBtree btree, Page page, BlockNumber blkno, OffsetNumber sto
 	/* last chance */
 	for (i = FirstOffsetNumber; i <= maxoff; i++)
 	{
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, i));
+		itup = PageGetItemIndex(page, PageGetItemId(page, i));
 		if (GinGetDownlink(itup) == blkno)
 			return i;
 	}
@@ -451,7 +451,7 @@ entryGetLeftMostPage(GinBtree btree, Page page)
 	Assert(!GinPageIsData(page));
 	Assert(PageGetMaxOffsetNumber(page) >= FirstOffsetNumber);
 
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, FirstOffsetNumber));
+	itup = PageGetItemIndex(page, PageGetItemId(page, FirstOffsetNumber));
 	return GinGetDownlink(itup);
 }
 
@@ -468,7 +468,7 @@ entryIsEnoughSpace(GinBtree btree, Buffer buf, OffsetNumber off,
 
 	if (insertData->isDelete)
 	{
-		IndexTuple	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, off));
+		IndexTuple	itup = PageGetItemIndex(page, PageGetItemId(page, off));
 
 		releasedsz = MAXALIGN(IndexTupleSize(itup)) + sizeof(ItemIdData);
 	}
@@ -501,7 +501,7 @@ entryPreparePage(GinBtree btree, Page page, OffsetNumber off,
 
 	if (!GinPageIsLeaf(page) && updateblkno != InvalidBlockNumber)
 	{
-		IndexTuple	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, off));
+		IndexTuple	itup = PageGetItemIndex(page, PageGetItemId(page, off));
 
 		GinSetDownlink(itup, updateblkno);
 	}
@@ -635,7 +635,7 @@ entrySplitPage(GinBtree btree, Buffer origbuf,
 			totalsize += size + sizeof(ItemIdData);
 		}
 
-		itup = (IndexTuple) PageGetItem(lpage, PageGetItemId(lpage, i));
+		itup = PageGetItemIndex(lpage, PageGetItemId(lpage, i));
 		size = MAXALIGN(IndexTupleSize(itup));
 		memcpy(ptr, itup, size);
 		ptr += size;
diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c
index 59a63f2..42a9c1a 100644
--- a/src/backend/access/gin/ginfast.c
+++ b/src/backend/access/gin/ginfast.c
@@ -671,7 +671,7 @@ processPendingPage(BuildAccumulator *accum, KeyArray *ka,
 
 	for (i = startoff; i <= maxoff; i = OffsetNumberNext(i))
 	{
-		IndexTuple	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, i));
+		IndexTuple	itup = PageGetItemIndex(page, PageGetItemId(page, i));
 		OffsetNumber curattnum;
 		Datum		curkey;
 		GinNullCategory curcategory;
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 9ed9fd2..296316e 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -146,7 +146,7 @@ collectMatchBitmap(GinBtreeData *btree, GinBtreeStack *stack,
 
 		page = BufferGetPage(stack->buffer);
 		TestForOldSnapshot(snapshot, btree->index, page);
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, stack->off));
+		itup = PageGetItemIndex(page, PageGetItemId(page, stack->off));
 
 		/*
 		 * If tuple stores another attribute then stop scan
@@ -254,7 +254,7 @@ collectMatchBitmap(GinBtreeData *btree, GinBtreeStack *stack,
 					elog(ERROR, "lost saved point in index");	/* must not happen !!! */
 
 				page = BufferGetPage(stack->buffer);
-				itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, stack->off));
+				itup = PageGetItemIndex(page, PageGetItemId(page, stack->off));
 
 				if (gintuple_get_attrnum(btree->ginstate, itup) != attnum)
 					elog(ERROR, "lost saved point in index");	/* must not happen !!! */
@@ -368,7 +368,7 @@ restartScanEntry:
 	}
 	else if (btreeEntry.findItem(&btreeEntry, stackEntry))
 	{
-		IndexTuple	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, stackEntry->off));
+		IndexTuple	itup = PageGetItemIndex(page, PageGetItemId(page, stackEntry->off));
 
 		if (GinIsPostingTree(itup))
 		{
@@ -1372,7 +1372,7 @@ scanGetCandidate(IndexScanDesc scan, pendingPosition *pos)
 		}
 		else
 		{
-			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, pos->firstOffset));
+			itup = PageGetItemIndex(page, PageGetItemId(page, pos->firstOffset));
 			pos->item = itup->t_tid;
 			if (GinPageHasFullRow(page))
 			{
@@ -1381,7 +1381,7 @@ scanGetCandidate(IndexScanDesc scan, pendingPosition *pos)
 				 */
 				for (pos->lastOffset = pos->firstOffset + 1; pos->lastOffset <= maxoff; pos->lastOffset++)
 				{
-					itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, pos->lastOffset));
+					itup = PageGetItemIndex(page, PageGetItemId(page, pos->lastOffset));
 					if (!ItemPointerEquals(&pos->item, &itup->t_tid))
 						break;
 				}
@@ -1432,7 +1432,7 @@ matchPartialInPendingList(GinState *ginstate, Page page,
 
 	while (off < maxoff)
 	{
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, off));
+		itup = PageGetItemIndex(page, PageGetItemId(page, off));
 
 		if (gintuple_get_attrnum(ginstate, itup) != entry->attnum)
 			return false;
@@ -1549,7 +1549,7 @@ collectMatchesForHeapRow(IndexScanDesc scan, pendingPosition *pos)
 
 					StopMiddle = StopLow + ((StopHigh - StopLow) >> 1);
 
-					itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, StopMiddle));
+					itup = PageGetItemIndex(page, PageGetItemId(page, StopMiddle));
 
 					attrnum = gintuple_get_attrnum(&so->ginstate, itup);
 
diff --git a/src/backend/access/gin/gininsert.c b/src/backend/access/gin/gininsert.c
index 9f784bf..4fbe6fa 100644
--- a/src/backend/access/gin/gininsert.c
+++ b/src/backend/access/gin/gininsert.c
@@ -198,7 +198,7 @@ ginEntryInsert(GinState *ginstate,
 	if (btree.findItem(&btree, stack))
 	{
 		/* found pre-existing entry */
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, stack->off));
+		itup = PageGetItemIndex(page, PageGetItemId(page, stack->off));
 
 		if (GinIsPostingTree(itup))
 		{
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index c258478..6badf19 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -418,7 +418,7 @@ ginVacuumEntryPage(GinVacuumState *gvs, Buffer buffer, BlockNumber *roots, uint3
 
 	for (i = FirstOffsetNumber; i <= maxoff; i++)
 	{
-		IndexTuple	itup = (IndexTuple) PageGetItem(tmppage, PageGetItemId(tmppage, i));
+		IndexTuple	itup = PageGetItemIndex(tmppage, PageGetItemId(tmppage, i));
 
 		if (GinIsPostingTree(itup))
 		{
@@ -488,7 +488,7 @@ ginVacuumEntryPage(GinVacuumState *gvs, Buffer buffer, BlockNumber *roots, uint3
 					tmppage = PageGetTempPageCopy(origpage);
 
 					/* set itup pointer to new page */
-					itup = (IndexTuple) PageGetItem(tmppage, PageGetItemId(tmppage, i));
+					itup = PageGetItemIndex(tmppage, PageGetItemId(tmppage, i));
 				}
 
 				attnum = gintuple_get_attrnum(&gvs->ginstate, itup);
@@ -580,7 +580,7 @@ ginbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 
 		Assert(PageGetMaxOffsetNumber(page) >= FirstOffsetNumber);
 
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, FirstOffsetNumber));
+		itup = PageGetItemIndex(page, PageGetItemId(page, FirstOffsetNumber));
 		blkno = GinGetDownlink(itup);
 		Assert(blkno != InvalidBlockNumber);
 
diff --git a/src/backend/access/gin/ginxlog.c b/src/backend/access/gin/ginxlog.c
index b4d310f..e4a3d03 100644
--- a/src/backend/access/gin/ginxlog.c
+++ b/src/backend/access/gin/ginxlog.c
@@ -108,7 +108,7 @@ ginRedoInsertEntry(Buffer buffer, bool isLeaf, BlockNumber rightblkno, void *rda
 		/* update link to right page after split */
 		Assert(!GinPageIsLeaf(page));
 		Assert(offset >= FirstOffsetNumber && offset <= PageGetMaxOffsetNumber(page));
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offset));
+		itup = PageGetItemIndex(page, PageGetItemId(page, offset));
 		GinSetDownlink(itup, rightblkno);
 	}
 
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index e8034b9..e2fe4e7 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -670,7 +670,7 @@ gistdoinsert(Relation r, IndexTuple itup, Size freespace, GISTSTATE *giststate)
 
 			downlinkoffnum = gistchoose(state.r, stack->page, itup, giststate);
 			iid = PageGetItemId(stack->page, downlinkoffnum);
-			idxtuple = (IndexTuple) PageGetItem(stack->page, iid);
+			idxtuple = PageGetItemIndex(stack->page, iid);
 			childblkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
 
 			/*
@@ -901,7 +901,7 @@ gistFindPath(Relation r, BlockNumber child, OffsetNumber *downlinkoffnum)
 		for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
 		{
 			iid = PageGetItemId(page, i);
-			idxtuple = (IndexTuple) PageGetItem(page, iid);
+			idxtuple = PageGetItemIndex(page, iid);
 			blkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
 			if (blkno == child)
 			{
@@ -960,7 +960,7 @@ gistFindCorrectParent(Relation r, GISTInsertStack *child)
 			for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
 			{
 				iid = PageGetItemId(parent->page, i);
-				idxtuple = (IndexTuple) PageGetItem(parent->page, iid);
+				idxtuple = PageGetItemIndex(parent->page, iid);
 				if (ItemPointerGetBlockNumber(&(idxtuple->t_tid)) == child->blkno)
 				{
 					/* yes!!, found */
@@ -1036,8 +1036,7 @@ gistformdownlink(Relation rel, Buffer buf, GISTSTATE *giststate,
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offset = FirstOffsetNumber; offset <= maxoff; offset = OffsetNumberNext(offset))
 	{
-		IndexTuple	ituple = (IndexTuple)
-		PageGetItem(page, PageGetItemId(page, offset));
+		IndexTuple	ituple = PageGetItemIndex(page, PageGetItemId(page, offset));
 
 		if (downlink == NULL)
 			downlink = CopyIndexTuple(ituple);
@@ -1069,7 +1068,7 @@ gistformdownlink(Relation rel, Buffer buf, GISTSTATE *giststate,
 		LockBuffer(stack->parent->buffer, GIST_EXCLUSIVE);
 		gistFindCorrectParent(rel, stack);
 		iid = PageGetItemId(stack->parent->page, stack->downlinkoffnum);
-		downlink = (IndexTuple) PageGetItem(stack->parent->page, iid);
+		downlink = PageGetItemIndex(stack->parent->page, iid);
 		downlink = CopyIndexTuple(downlink);
 		LockBuffer(stack->parent->buffer, GIST_UNLOCK);
 	}
diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c
index 4e43a69..e361cd2 100644
--- a/src/backend/access/gist/gistbuild.c
+++ b/src/backend/access/gist/gistbuild.c
@@ -592,7 +592,7 @@ gistProcessItup(GISTBuildState *buildstate, IndexTuple itup,
 		page = (Page) BufferGetPage(buffer);
 		childoffnum = gistchoose(indexrel, page, itup, giststate);
 		iid = PageGetItemId(page, childoffnum);
-		idxtuple = (IndexTuple) PageGetItem(page, iid);
+		idxtuple = PageGetItemIndex(page, iid);
 		childblkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
 
 		if (level > 1)
@@ -719,7 +719,7 @@ gistbufferinginserttuples(GISTBuildState *buildstate, Buffer buffer, int level,
 			for (off = FirstOffsetNumber; off <= maxoff; off++)
 			{
 				ItemId		iid = PageGetItemId(page, off);
-				IndexTuple	idxtuple = (IndexTuple) PageGetItem(page, iid);
+				IndexTuple	idxtuple = PageGetItemIndex(page, iid);
 				BlockNumber childblkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
 				Buffer		childbuf = ReadBuffer(buildstate->indexrel, childblkno);
 
@@ -876,7 +876,7 @@ gistBufferingFindCorrectParent(GISTBuildState *buildstate,
 		*downlinkoffnum != InvalidOffsetNumber && *downlinkoffnum <= maxoff)
 	{
 		ItemId		iid = PageGetItemId(page, *downlinkoffnum);
-		IndexTuple	idxtuple = (IndexTuple) PageGetItem(page, iid);
+		IndexTuple	idxtuple = PageGetItemIndex(page, iid);
 
 		if (ItemPointerGetBlockNumber(&(idxtuple->t_tid)) == childblkno)
 		{
@@ -895,7 +895,7 @@ gistBufferingFindCorrectParent(GISTBuildState *buildstate,
 	for (off = FirstOffsetNumber; off <= maxoff; off = OffsetNumberNext(off))
 	{
 		ItemId		iid = PageGetItemId(page, off);
-		IndexTuple	idxtuple = (IndexTuple) PageGetItem(page, iid);
+		IndexTuple	idxtuple = PageGetItemIndex(page, iid);
 
 		if (ItemPointerGetBlockNumber(&(idxtuple->t_tid)) == childblkno)
 		{
@@ -1081,7 +1081,7 @@ gistGetMaxLevel(Relation index)
 		 * matter which downlink we choose, the tree has the same depth
 		 * everywhere, so we just pick the first one.
 		 */
-		itup = (IndexTuple) PageGetItem(page,
+		itup = PageGetItemIndex(page,
 									 PageGetItemId(page, FirstOffsetNumber));
 		blkno = ItemPointerGetBlockNumber(&(itup->t_tid));
 		UnlockReleaseBuffer(buffer);
@@ -1175,7 +1175,7 @@ gistMemorizeAllDownlinks(GISTBuildState *buildstate, Buffer parentbuf)
 	for (off = FirstOffsetNumber; off <= maxoff; off++)
 	{
 		ItemId		iid = PageGetItemId(page, off);
-		IndexTuple	idxtuple = (IndexTuple) PageGetItem(page, iid);
+		IndexTuple	idxtuple = PageGetItemIndex(page, iid);
 		BlockNumber childblkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
 
 		gistMemorizeParent(buildstate, childblkno, parentblkno);
diff --git a/src/backend/access/gist/gistget.c b/src/backend/access/gist/gistget.c
index 5ba7d0a..d8e5992 100644
--- a/src/backend/access/gist/gistget.c
+++ b/src/backend/access/gist/gistget.c
@@ -404,7 +404,7 @@ gistScanPage(IndexScanDesc scan, GISTSearchItem *pageItem, double *myDistances,
 		if (scan->ignore_killed_tuples && ItemIdIsDead(iid))
 			continue;
 
-		it = (IndexTuple) PageGetItem(page, iid);
+		it = PageGetItemIndex(page, iid);
 
 		/*
 		 * Must call gistindex_keytest in tempCxt, and clean up any leftover
diff --git a/src/backend/access/gist/gistutil.c b/src/backend/access/gist/gistutil.c
index fac166d..dc7cee8 100644
--- a/src/backend/access/gist/gistutil.c
+++ b/src/backend/access/gist/gistutil.c
@@ -62,7 +62,7 @@ gistnospace(Page page, IndexTuple *itvec, int len, OffsetNumber todelete, Size f
 
 	if (todelete != InvalidOffsetNumber)
 	{
-		IndexTuple	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, todelete));
+		IndexTuple	itup = PageGetItemIndex(page, PageGetItemId(page, todelete));
 
 		deleted = IndexTupleSize(itup) + sizeof(ItemIdData);
 	}
@@ -97,7 +97,7 @@ gistextractpage(Page page, int *len /* out */ )
 	*len = maxoff;
 	itvec = palloc(sizeof(IndexTuple) * maxoff);
 	for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
-		itvec[i - FirstOffsetNumber] = (IndexTuple) PageGetItem(page, PageGetItemId(page, i));
+		itvec[i - FirstOffsetNumber] = PageGetItemIndex(page, PageGetItemId(page, i));
 
 	return itvec;
 }
@@ -431,7 +431,7 @@ gistchoose(Relation r, Page p, IndexTuple it,	/* it has compressed entry */
 
 	for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
 	{
-		IndexTuple	itup = (IndexTuple) PageGetItem(p, PageGetItemId(p, i));
+		IndexTuple	itup = PageGetItemIndex(p, PageGetItemId(p, i));
 		bool		zero_penalty;
 		int			j;
 
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 53e5cea..3abb570 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -200,7 +200,7 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 			for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
 			{
 				iid = PageGetItemId(page, i);
-				idxtuple = (IndexTuple) PageGetItem(page, iid);
+				idxtuple = PageGetItemIndex(page, iid);
 
 				if (callback(&(idxtuple->t_tid), callback_state))
 					todelete[ntodelete++] = i;
@@ -245,7 +245,7 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 			for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i))
 			{
 				iid = PageGetItemId(page, i);
-				idxtuple = (IndexTuple) PageGetItem(page, iid);
+				idxtuple = PageGetItemIndex(page, iid);
 
 				ptr = (GistBDItem *) palloc(sizeof(GistBDItem));
 				ptr->blkno = ItemPointerGetBlockNumber(&(idxtuple->t_tid));
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 30c82e1..6e1583c 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -302,7 +302,7 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 		{
 			IndexTuple	itup;
 
-			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+			itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
 		}
@@ -588,7 +588,7 @@ loop_top:
 				IndexTuple	itup;
 				ItemPointer htup;
 
-				itup = (IndexTuple) PageGetItem(page,
+				itup = PageGetItemIndex(page,
 												PageGetItemId(page, offno));
 				htup = &(itup->t_tid);
 				if (callback(htup, callback_state))
diff --git a/src/backend/access/hash/hashovfl.c b/src/backend/access/hash/hashovfl.c
index db3e268..1f30b53 100644
--- a/src/backend/access/hash/hashovfl.c
+++ b/src/backend/access/hash/hashovfl.c
@@ -656,7 +656,7 @@ _hash_squeezebucket(Relation rel,
 			IndexTuple	itup;
 			Size		itemsz;
 
-			itup = (IndexTuple) PageGetItem(rpage,
+			itup = PageGetItemIndex(rpage,
 											PageGetItemId(rpage, roffnum));
 			itemsz = IndexTupleDSize(*itup);
 			itemsz = MAXALIGN(itemsz);
diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c
index 178463f..5c20fcc 100644
--- a/src/backend/access/hash/hashpage.c
+++ b/src/backend/access/hash/hashpage.c
@@ -815,7 +815,7 @@ _hash_splitbucket(Relation rel,
 			 * Fetch the item's hash key (conveniently stored in the item) and
 			 * determine which bucket it now belongs in.
 			 */
-			itup = (IndexTuple) PageGetItem(opage,
+			itup = PageGetItemIndex(opage,
 											PageGetItemId(opage, ooffnum));
 			bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
 										  maxbucket, highmask, lowmask);
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 4825558..8d42868 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -56,7 +56,7 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	offnum = ItemPointerGetOffsetNumber(current);
 	_hash_checkpage(rel, buf, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);
 	page = BufferGetPage(buf);
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+	itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
 	return true;
@@ -260,7 +260,7 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	offnum = ItemPointerGetOffsetNumber(current);
 	_hash_checkpage(rel, buf, LH_BUCKET_PAGE | LH_OVERFLOW_PAGE);
 	page = BufferGetPage(buf);
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+	itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
 	return true;
@@ -337,7 +337,7 @@ _hash_step(IndexScanDesc scan, Buffer *bufP, ScanDirection dir)
 					if (offnum <= maxoff)
 					{
 						Assert(offnum >= FirstOffsetNumber);
-						itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+						itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 						if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup))
 							break;		/* yes, so exit for-loop */
 					}
@@ -378,7 +378,7 @@ _hash_step(IndexScanDesc scan, Buffer *bufP, ScanDirection dir)
 					if (offnum >= FirstOffsetNumber)
 					{
 						Assert(offnum <= maxoff);
-						itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+						itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 						if (so->hashso_sk_hash == _hash_get_indextuple_hashkey(itup))
 							break;		/* yes, so exit for-loop */
 					}
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index 822862d..d1811f7 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -304,7 +304,7 @@ _hash_binsearch(Page page, uint32 hash_value)
 		off = (upper + lower) / 2;
 		Assert(OffsetNumberIsValid(off));
 
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, off));
+		itup = PageGetItemIndex(page, PageGetItemId(page, off));
 		hashkey = _hash_get_indextuple_hashkey(itup);
 		if (hashkey < hash_value)
 			lower = off + 1;
@@ -342,7 +342,7 @@ _hash_binsearch_last(Page page, uint32 hash_value)
 		off = (upper + lower + 1) / 2;
 		Assert(OffsetNumberIsValid(off));
 
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, off));
+		itup = PageGetItemIndex(page, PageGetItemId(page, off));
 		hashkey = _hash_get_indextuple_hashkey(itup);
 		if (hashkey > hash_value)
 			upper = off - 1;
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 38bba16..90a6937 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -431,7 +431,7 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
 			bool		valid;
 
 			loctup.t_tableOid = RelationGetRelid(scan->rs_rd);
-			loctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
+			loctup.t_data = PageGetItemHeapHeaderOnly((Page) dp, lpp);
 			loctup.t_len = ItemIdGetLength(lpp);
 			ItemPointerSet(&(loctup.t_self), page, lineoff);
 
@@ -625,7 +625,7 @@ heapgettup(HeapScanDesc scan,
 		lpp = PageGetItemId(dp, lineoff);
 		Assert(ItemIdIsNormal(lpp));
 
-		tuple->t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
+		tuple->t_data = PageGetItemHeap((Page) dp, lpp);
 		tuple->t_len = ItemIdGetLength(lpp);
 
 		return;
@@ -644,7 +644,7 @@ heapgettup(HeapScanDesc scan,
 			{
 				bool		valid;
 
-				tuple->t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
+				tuple->t_data = PageGetItemHeap((Page) dp, lpp);
 				tuple->t_len = ItemIdGetLength(lpp);
 				ItemPointerSet(&(tuple->t_self), page, lineoff);
 
@@ -921,7 +921,7 @@ heapgettup_pagemode(HeapScanDesc scan,
 		lpp = PageGetItemId(dp, lineoff);
 		Assert(ItemIdIsNormal(lpp));
 
-		tuple->t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
+		tuple->t_data = PageGetItemHeap((Page) dp, lpp);
 		tuple->t_len = ItemIdGetLength(lpp);
 
 		/* check that rs_cindex is in sync */
@@ -943,7 +943,7 @@ heapgettup_pagemode(HeapScanDesc scan,
 			lpp = PageGetItemId(dp, lineoff);
 			Assert(ItemIdIsNormal(lpp));
 
-			tuple->t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);
+			tuple->t_data = PageGetItemHeap((Page) dp, lpp);
 			tuple->t_len = ItemIdGetLength(lpp);
 			ItemPointerSet(&(tuple->t_self), page, lineoff);
 
@@ -1915,7 +1915,7 @@ heap_fetch(Relation relation,
 	/*
 	 * fill in *tuple fields
 	 */
-	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	tuple->t_data = PageGetItemHeap(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
 
@@ -2030,7 +2030,7 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 		}
 
-		heapTuple->t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		heapTuple->t_data = PageGetItemHeap(dp, lp);
 		heapTuple->t_len = ItemIdGetLength(lp);
 		heapTuple->t_tableOid = RelationGetRelid(relation);
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
@@ -2221,7 +2221,7 @@ heap_get_latest_tid(Relation relation,
 
 		/* OK to access the tuple */
 		tp.t_self = ctid;
-		tp.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+		tp.t_data = PageGetItemHeapHeaderOnly(page, lp);
 		tp.t_len = ItemIdGetLength(lp);
 		tp.t_tableOid = RelationGetRelid(relation);
 
@@ -3048,7 +3048,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
-	tp.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	tp.t_data = PageGetItemHeap(page, lp);
 	tp.t_len = ItemIdGetLength(lp);
 	tp.t_self = *tid;
 
@@ -3528,7 +3528,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
-	oldtup.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	oldtup.t_data = PageGetItemHeap(page, lp);
 	oldtup.t_len = ItemIdGetLength(lp);
 	oldtup.t_self = *otid;
 
@@ -4598,7 +4598,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
 	Assert(ItemIdIsNormal(lp));
 
-	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	tuple->t_data = PageGetItemHeap(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
 
@@ -5960,7 +5960,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
 		elog(ERROR, "invalid lp");
 
-	htup = (HeapTupleHeader) PageGetItem(page, lp);
+	htup = PageGetItemHeapHeaderOnly(page, lp);
 
 	/* SpecTokenOffsetNumber should be distinguishable from any real offset */
 	StaticAssertStmt(MaxOffsetNumber < SpecTokenOffsetNumber,
@@ -6059,7 +6059,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
-	tp.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	tp.t_data = PageGetItemHeapHeaderOnly(page, lp);
 	tp.t_len = ItemIdGetLength(lp);
 	tp.t_self = *tid;
 
@@ -6201,7 +6201,7 @@ heap_inplace_update(Relation relation, HeapTuple tuple)
 	if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
 		elog(ERROR, "invalid lp");
 
-	htup = (HeapTupleHeader) PageGetItem(page, lp);
+	htup = PageGetItemHeap(page, lp);
 
 	oldlen = ItemIdGetLength(lp) - htup->t_hoff;
 	newlen = tuple->t_len - tuple->t_data->t_hoff;
@@ -8120,7 +8120,7 @@ heap_xlog_freeze_page(XLogReaderState *record)
 
 			xlrec_tp = &tuples[ntup];
 			lp = PageGetItemId(page, xlrec_tp->offset); /* offsets are one-based */
-			tuple = (HeapTupleHeader) PageGetItem(page, lp);
+			tuple = PageGetItemHeapHeaderOnly(page, lp);
 
 			heap_execute_freeze_tuple(tuple, xlrec_tp);
 		}
@@ -8201,7 +8201,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (PageGetMaxOffsetNumber(page) < xlrec->offnum || !ItemIdIsNormal(lp))
 			elog(PANIC, "invalid lp");
 
-		htup = (HeapTupleHeader) PageGetItem(page, lp);
+		htup = PageGetItemHeapHeaderOnly(page, lp);
 
 		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
@@ -8567,7 +8567,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
 			elog(PANIC, "invalid lp");
 
-		htup = (HeapTupleHeader) PageGetItem(page, lp);
+		htup = PageGetItemHeap(page, lp);
 
 		oldtup.t_data = htup;
 		oldtup.t_len = ItemIdGetLength(lp);
@@ -8778,7 +8778,7 @@ heap_xlog_confirm(XLogReaderState *record)
 		if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
 			elog(PANIC, "invalid lp");
 
-		htup = (HeapTupleHeader) PageGetItem(page, lp);
+		htup = PageGetItemHeapHeaderOnly(page, lp);
 
 		/*
 		 * Confirm tuple as actually inserted
@@ -8835,7 +8835,7 @@ heap_xlog_lock(XLogReaderState *record)
 		if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
 			elog(PANIC, "invalid lp");
 
-		htup = (HeapTupleHeader) PageGetItem(page, lp);
+		htup = PageGetItemHeapHeaderOnly(page, lp);
 
 		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
@@ -8908,7 +8908,7 @@ heap_xlog_lock_updated(XLogReaderState *record)
 		if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
 			elog(PANIC, "invalid lp");
 
-		htup = (HeapTupleHeader) PageGetItem(page, lp);
+		htup = PageGetItemHeapHeaderOnly(page, lp);
 
 		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
@@ -8949,7 +8949,7 @@ heap_xlog_inplace(XLogReaderState *record)
 		if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
 			elog(PANIC, "invalid lp");
 
-		htup = (HeapTupleHeader) PageGetItem(page, lp);
+		htup = PageGetItemHeap(page, lp);
 
 		oldlen = ItemIdGetLength(lp) - htup->t_hoff;
 		if (oldlen != newlen)
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..e4ef0e1 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -66,10 +66,10 @@ RelationPutHeapTuple(Relation relation,
 	 */
 	if (!token)
 	{
-		ItemId		itemId = PageGetItemId(pageHeader, offnum);
-		Item		item = PageGetItem(pageHeader, itemId);
+		ItemId			itemId = PageGetItemId(pageHeader, offnum);
+		HeapTupleHeader	onpage_tup = PageGetItemHeapHeaderOnly(pageHeader, itemId);
 
-		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+		onpage_tup->t_ctid = tuple->t_self;
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..3e07b10 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -377,7 +377,7 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 	 */
 	if (ItemIdIsNormal(rootlp))
 	{
-		htup = (HeapTupleHeader) PageGetItem(dp, rootlp);
+		htup = PageGetItemHeapHeaderOnly(dp, rootlp);
 
 		tup.t_data = htup;
 		tup.t_len = ItemIdGetLength(rootlp);
@@ -464,7 +464,7 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			break;
 
 		Assert(ItemIdIsNormal(lp));
-		htup = (HeapTupleHeader) PageGetItem(dp, lp);
+		htup = PageGetItemHeapHeaderOnly(dp, lp);
 
 		tup.t_data = htup;
 		tup.t_len = ItemIdGetLength(lp);
@@ -762,7 +762,7 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 		if (ItemIdIsNormal(lp))
 		{
-			htup = (HeapTupleHeader) PageGetItem(page, lp);
+			htup = PageGetItemHeapHeaderOnly(page, lp);
 
 			/*
 			 * Check if this tuple is part of a HOT-chain rooted at some other
@@ -811,7 +811,7 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			if (!ItemIdIsNormal(lp))
 				break;
 
-			htup = (HeapTupleHeader) PageGetItem(page, lp);
+			htup = PageGetItemHeapHeaderOnly(page, lp);
 
 			if (TransactionIdIsValid(priorXmax) &&
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index f9ce986..660925e 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -731,7 +731,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		HeapTupleHeader onpage_tup;
 
 		newitemid = PageGetItemId(page, newoff);
-		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
+		onpage_tup = PageGetItemHeapHeaderOnly(page, newitemid);
 
 		onpage_tup->t_ctid = tup->t_self;
 	}
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index ef69290..0005ac3 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -305,7 +305,7 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 					break;		/* we're past all the equal tuples */
 
 				/* okay, we gotta fetch the heap tuple ... */
-				curitup = (IndexTuple) PageGetItem(page, curitemid);
+				curitup = PageGetItemIndex(page, curitemid);
 				htid = curitup->t_tid;
 
 				/*
@@ -1050,7 +1050,7 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
 	{
 		itemid = PageGetItemId(origpage, P_HIKEY);
 		itemsz = ItemIdGetLength(itemid);
-		item = (IndexTuple) PageGetItem(origpage, itemid);
+		item = PageGetItemIndex(origpage, itemid);
 		if (PageAddItem(rightpage, (Item) item, itemsz, rightoff,
 						false, false) == InvalidOffsetNumber)
 		{
@@ -1079,7 +1079,7 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
 		/* existing item at firstright will become first on right page */
 		itemid = PageGetItemId(origpage, firstright);
 		itemsz = ItemIdGetLength(itemid);
-		item = (IndexTuple) PageGetItem(origpage, itemid);
+		item = PageGetItemIndex(origpage, itemid);
 	}
 	if (PageAddItem(leftpage, (Item) item, itemsz, leftoff,
 					false, false) == InvalidOffsetNumber)
@@ -1103,7 +1103,7 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
 	{
 		itemid = PageGetItemId(origpage, i);
 		itemsz = ItemIdGetLength(itemid);
-		item = (IndexTuple) PageGetItem(origpage, itemid);
+		item = PageGetItemIndex(origpage, itemid);
 
 		/* does new item belong before this one? */
 		if (i == newitemoff)
@@ -1309,7 +1309,7 @@ _bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
 			 * page.
 			 */
 			itemid = PageGetItemId(origpage, P_HIKEY);
-			item = (IndexTuple) PageGetItem(origpage, itemid);
+			item = PageGetItemIndex(origpage, itemid);
 			XLogRegisterBufData(0, (char *) item, MAXALIGN(IndexTupleSize(item)));
 		}
 
@@ -1692,7 +1692,7 @@ _bt_insert_parent(Relation rel,
 		}
 
 		/* get high key from left page == lowest key on new right page */
-		ritem = (IndexTuple) PageGetItem(page,
+		ritem = PageGetItemIndex(page,
 										 PageGetItemId(page, P_HIKEY));
 
 		/* form an index tuple that points at the new right page */
@@ -1860,7 +1860,7 @@ _bt_getstackbuf(Relation rel, BTStack stack, int access)
 				 offnum = OffsetNumberNext(offnum))
 			{
 				itemid = PageGetItemId(page, offnum);
-				item = (IndexTuple) PageGetItem(page, itemid);
+				item = PageGetItemIndex(page, itemid);
 				if (BTEntrySame(item, &stack->bts_btentry))
 				{
 					/* Return accurate pointer to where link is now */
@@ -1875,7 +1875,7 @@ _bt_getstackbuf(Relation rel, BTStack stack, int access)
 				 offnum = OffsetNumberPrev(offnum))
 			{
 				itemid = PageGetItemId(page, offnum);
-				item = (IndexTuple) PageGetItem(page, itemid);
+				item = PageGetItemIndex(page, itemid);
 				if (BTEntrySame(item, &stack->bts_btentry))
 				{
 					/* Return accurate pointer to where link is now */
@@ -1970,7 +1970,7 @@ _bt_newroot(Relation rel, Buffer lbuf, Buffer rbuf)
 	 */
 	itemid = PageGetItemId(lpage, P_HIKEY);
 	right_item_sz = ItemIdGetLength(itemid);
-	item = (IndexTuple) PageGetItem(lpage, itemid);
+	item = PageGetItemIndex(lpage, itemid);
 	right_item = CopyIndexTuple(item);
 	ItemPointerSet(&(right_item->t_tid), rbkno, P_HIKEY);
 
@@ -2128,7 +2128,7 @@ _bt_isequal(TupleDesc itupdesc, Page page, OffsetNumber offnum,
 	/* Better be comparing to a leaf item */
 	Assert(P_ISLEAF((BTPageOpaque) PageGetSpecialPointer(page)));
 
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+	itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 
 	for (i = 1; i <= keysz; i++)
 	{
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index 390bd1a..99ad72d 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -1209,7 +1209,7 @@ _bt_pagedel(Relation rel, Buffer buf)
 				BlockNumber leftsib;
 
 				itemid = PageGetItemId(page, P_HIKEY);
-				targetkey = CopyIndexTuple((IndexTuple) PageGetItem(page, itemid));
+				targetkey = CopyIndexTuple(PageGetItemIndex(page, itemid));
 
 				leftsib = opaque->btpo_prev;
 
@@ -1390,13 +1390,13 @@ _bt_mark_page_halfdead(Relation rel, Buffer leafbuf, BTStack stack)
 
 #ifdef USE_ASSERT_CHECKING
 	itemid = PageGetItemId(page, topoff);
-	itup = (IndexTuple) PageGetItem(page, itemid);
+	itup = PageGetItemIndex(page, itemid);
 	Assert(ItemPointerGetBlockNumber(&(itup->t_tid)) == target);
 #endif
 
 	nextoffset = OffsetNumberNext(topoff);
 	itemid = PageGetItemId(page, nextoffset);
-	itup = (IndexTuple) PageGetItem(page, itemid);
+	itup = PageGetItemIndex(page, itemid);
 	if (ItemPointerGetBlockNumber(&(itup->t_tid)) != rightsib)
 		elog(ERROR, "right sibling %u of block %u is not next child %u of block %u in index \"%s\"",
 			 rightsib, target, ItemPointerGetBlockNumber(&(itup->t_tid)),
@@ -1421,7 +1421,7 @@ _bt_mark_page_halfdead(Relation rel, Buffer leafbuf, BTStack stack)
 	opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 
 	itemid = PageGetItemId(page, topoff);
-	itup = (IndexTuple) PageGetItem(page, itemid);
+	itup = PageGetItemIndex(page, itemid);
 	ItemPointerSet(&(itup->t_tid), rightsib, P_HIKEY);
 
 	nextoffset = OffsetNumberNext(topoff);
@@ -1534,7 +1534,7 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 	 * Remember some information about the leaf page.
 	 */
 	itemid = PageGetItemId(page, P_HIKEY);
-	leafhikey = &((IndexTuple) PageGetItem(page, itemid))->t_tid;
+	leafhikey = &(PageGetItemIndex(page, itemid))->t_tid;
 	leafleftsib = opaque->btpo_prev;
 	leafrightsib = opaque->btpo_next;
 
@@ -1650,7 +1650,7 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 		/* remember the next non-leaf child down in the branch. */
 		itemid = PageGetItemId(page, P_FIRSTDATAKEY(opaque));
-		nextchild = ItemPointerGetBlockNumber(&((IndexTuple) PageGetItem(page, itemid))->t_tid);
+		nextchild = ItemPointerGetBlockNumber(&(PageGetItemIndex(page, itemid))->t_tid);
 		if (nextchild == leafblkno)
 			nextchild = InvalidBlockNumber;
 	}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 1f47973..78af1e7 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -988,7 +988,7 @@ restart:
 				IndexTuple	itup;
 				ItemPointer htup;
 
-				itup = (IndexTuple) PageGetItem(page,
+				itup = PageGetItemIndex(page,
 												PageGetItemId(page, offnum));
 				htup = &(itup->t_tid);
 
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index ee46023..43b563c 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -142,7 +142,7 @@ _bt_search(Relation rel, int keysz, ScanKey scankey, bool nextkey,
 		 */
 		offnum = _bt_binsrch(rel, *bufP, keysz, scankey, nextkey);
 		itemid = PageGetItemId(page, offnum);
-		itup = (IndexTuple) PageGetItem(page, itemid);
+		itup = PageGetItemIndex(page, itemid);
 		blkno = ItemPointerGetBlockNumber(&(itup->t_tid));
 		par_blkno = BufferGetBlockNumber(*bufP);
 
@@ -439,7 +439,7 @@ _bt_compare(Relation rel,
 	if (!P_ISLEAF(opaque) && offnum == P_FIRSTDATAKEY(opaque))
 		return 1;
 
-	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+	itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 
 	/*
 	 * The scan key is set up with the attribute number associated with each
@@ -1636,7 +1636,7 @@ _bt_get_endpoint(Relation rel, uint32 level, bool rightmost,
 		else
 			offnum = P_FIRSTDATAKEY(opaque);
 
-		itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
+		itup = PageGetItemIndex(page, PageGetItemId(page, offnum));
 		blkno = ItemPointerGetBlockNumber(&(itup->t_tid));
 
 		buf = _bt_relandgetbuf(rel, buf, blkno, BT_READ);
diff --git a/src/backend/access/nbtree/nbtsort.c b/src/backend/access/nbtree/nbtsort.c
index 99a014e..82bf6f1 100644
--- a/src/backend/access/nbtree/nbtsort.c
+++ b/src/backend/access/nbtree/nbtsort.c
@@ -526,7 +526,7 @@ _bt_buildadd(BTWriteState *wstate, BTPageState *state, IndexTuple itup)
 		 */
 		Assert(last_off > P_FIRSTKEY);
 		ii = PageGetItemId(opage, last_off);
-		oitup = (IndexTuple) PageGetItem(opage, ii);
+		oitup = PageGetItemIndex(opage, ii);
 		_bt_sortaddtup(npage, ItemIdGetLength(ii), oitup, P_FIRSTKEY);
 
 		/*
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 83c553c..c2537d8 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -1402,7 +1402,7 @@ _bt_checkkeys(IndexScanDesc scan,
 	else
 		tuple_alive = true;
 
-	tuple = (IndexTuple) PageGetItem(page, iid);
+	tuple = PageGetItemIndex(page, iid);
 
 	tupdesc = RelationGetDescr(scan->indexRelation);
 	so = (BTScanOpaque) scan->opaque;
@@ -1797,7 +1797,7 @@ _bt_killitems(IndexScanDesc scan)
 		while (offnum <= maxoff)
 		{
 			ItemId		iid = PageGetItemId(page, offnum);
-			IndexTuple	ituple = (IndexTuple) PageGetItem(page, iid);
+			IndexTuple	ituple = PageGetItemIndex(page, iid);
 
 			if (ItemPointerEquals(&ituple->t_tid, &kitem->heapTid))
 			{
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index c536e22..9162cae 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -584,7 +584,7 @@ btree_xlog_delete_get_latestRemovedXid(XLogReaderState *record)
 		 * Identify the index tuple about to be deleted
 		 */
 		iitemid = PageGetItemId(ipage, unused[i]);
-		itup = (IndexTuple) PageGetItem(ipage, iitemid);
+		itup = PageGetItemIndex(ipage, iitemid);
 
 		/*
 		 * Locate the heap page that the index tuple points at
@@ -626,7 +626,7 @@ btree_xlog_delete_get_latestRemovedXid(XLogReaderState *record)
 		 */
 		if (ItemIdHasStorage(hitemid))
 		{
-			htuphdr = (HeapTupleHeader) PageGetItem(hpage, hitemid);
+			htuphdr = PageGetItemHeapHeaderOnly(hpage, hitemid);
 
 			HeapTupleHeaderAdvanceLatestRemovedXid(htuphdr, &latestRemovedXid);
 		}
@@ -752,11 +752,11 @@ btree_xlog_mark_page_halfdead(uint8 info, XLogReaderState *record)
 
 		nextoffset = OffsetNumberNext(poffset);
 		itemid = PageGetItemId(page, nextoffset);
-		itup = (IndexTuple) PageGetItem(page, itemid);
+		itup = PageGetItemIndex(page, itemid);
 		rightsib = ItemPointerGetBlockNumber(&itup->t_tid);
 
 		itemid = PageGetItemId(page, poffset);
-		itup = (IndexTuple) PageGetItem(page, itemid);
+		itup = PageGetItemIndex(page, itemid);
 		ItemPointerSet(&(itup->t_tid), rightsib, P_HIKEY);
 		nextoffset = OffsetNumberNext(poffset);
 		PageIndexTupleDelete(page, nextoffset);
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 5fcedd7..4e08899 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1056,7 +1056,7 @@ acquire_sample_rows(Relation onerel, int elevel,
 			ItemPointerSet(&targtuple.t_self, targblock, targoffset);
 
 			targtuple.t_tableOid = RelationGetRelid(onerel);
-			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
+			targtuple.t_data = PageGetItemHeap(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index c98f981..c3312d1 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1126,7 +1126,7 @@ read_seq_tuple(SeqTable elm, Relation rel, Buffer *buf, HeapTuple seqtuple)
 	Assert(ItemIdIsNormal(lp));
 
 	/* Note we currently only bother to set these two fields of *seqtuple */
-	seqtuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
+	seqtuple->t_data = PageGetItemHeap(page, lp);
 	seqtuple->t_len = ItemIdGetLength(lp);
 
 	/*
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 99a659a..969a5e9 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -2803,7 +2803,7 @@ ltrmark:;
 
 		Assert(ItemIdIsNormal(lp));
 
-		tuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+		tuple.t_data = PageGetItemHeap(page, lp);
 		tuple.t_len = ItemIdGetLength(lp);
 		tuple.t_self = *tid;
 		tuple.t_tableOid = RelationGetRelid(relation);
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 231e92d..5460bf7 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -957,7 +957,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 			Assert(ItemIdIsNormal(itemid));
 
-			tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid);
+			tuple.t_data = PageGetItemHeap(page, itemid);
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
@@ -991,6 +991,11 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 					break;
 				case HEAPTUPLE_LIVE:
 					/* Tuple is good --- but let's do some validity checks */
+					/*
+					 * This warning is the only thing that keeps us from
+					 * replacing PageGetItemHeap with PageGetItemHeapHeaderOnly
+					 * here; vacuuming needs none of the tuple's attributes.
+					 */
 					if (onerel->rd_rel->relhasoids &&
 						!OidIsValid(HeapTupleGetOid(&tuple)))
 						elog(WARNING, "relation \"%s\" TID %u/%u: OID is invalid",
@@ -1100,7 +1105,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				HeapTupleHeader htup;
 
 				itemid = PageGetItemId(page, frozen[i].offset);
-				htup = (HeapTupleHeader) PageGetItem(page, itemid);
+				htup = PageGetItemHeapHeaderOnly(page, itemid);
 
 				heap_execute_freeze_tuple(htup, &frozen[i]);
 			}
@@ -1558,7 +1563,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 		if (!ItemIdIsNormal(itemid))
 			continue;
 
-		tupleheader = (HeapTupleHeader) PageGetItem(page, itemid);
+		tupleheader = PageGetItemHeapHeaderOnly(page, itemid);
 
 		if (heap_tuple_needs_freeze(tupleheader, FreezeLimit,
 									MultiXactCutoff, buf))
@@ -2098,7 +2103,7 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 
 		Assert(ItemIdIsNormal(itemid));
 
-		tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid);
+		tuple.t_data = PageGetItemHeapHeaderOnly(page, itemid);
 		tuple.t_len = ItemIdGetLength(itemid);
 		tuple.t_tableOid = RelationGetRelid(rel);
 
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..aa27925 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -261,7 +261,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
 		lp = PageGetItemId(dp, targoffset);
 		Assert(ItemIdIsNormal(lp));
 
-		scan->rs_ctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp);
+		scan->rs_ctup.t_data = PageGetItemHeap((Page) dp, lp);
 		scan->rs_ctup.t_len = ItemIdGetLength(lp);
 		scan->rs_ctup.t_tableOid = scan->rs_rd->rd_id;
 		ItemPointerSet(&scan->rs_ctup.t_self, tbmres->blockno, targoffset);
@@ -388,7 +388,7 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			lp = PageGetItemId(dp, offnum);
 			if (!ItemIdIsNormal(lp))
 				continue;
-			loctup.t_data = (HeapTupleHeader) PageGetItem((Page) dp, lp);
+			loctup.t_data = PageGetItemHeapHeaderOnly((Page) dp, lp);
 			loctup.t_len = ItemIdGetLength(lp);
 			loctup.t_tableOid = scan->rs_rd->rd_id;
 			ItemPointerSet(&loctup.t_self, page, offnum);
diff --git a/src/backend/executor/nodeSamplescan.c b/src/backend/executor/nodeSamplescan.c
index 9ce7c02..33e8b55 100644
--- a/src/backend/executor/nodeSamplescan.c
+++ b/src/backend/executor/nodeSamplescan.c
@@ -461,7 +461,7 @@ tablesample_getnext(SampleScanState *scanstate)
 			if (!ItemIdIsNormal(itemid))
 				continue;
 
-			tuple->t_data = (HeapTupleHeader) PageGetItem(page, itemid);
+			tuple->t_data = PageGetItemHeap(page, itemid);
 			tuple->t_len = ItemIdGetLength(itemid);
 			ItemPointerSet(&(tuple->t_self), blockno, tupoffset);
 
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index 15cebfc..be6bd28 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -341,6 +341,31 @@ PageValidateSpecialPointer(Page page)
 	(Item)(((char *)(page)) + ItemIdGetOffset(itemId)) \
 )
 
+/*
+ * Split PageGetItem into a set of type-specific macros for readability;
+ * the HeaderOnly variant marks call sites using only header fields.
+ */
+#define PageGetItemHeap(page, itemId) \
+( \
+	AssertMacro(PageIsValid(page)), \
+	AssertMacro(ItemIdHasStorage(itemId)), \
+	(HeapTupleHeader)(((char *)(page)) + ItemIdGetOffset(itemId)) \
+)
+
+#define PageGetItemHeapHeaderOnly(page, itemId) \
+( \
+	AssertMacro(PageIsValid(page)), \
+	AssertMacro(ItemIdHasStorage(itemId)), \
+	(HeapTupleHeader)(((char *)(page)) + ItemIdGetOffset(itemId)) \
+)
+
+#define PageGetItemIndex(page, itemId) \
+( \
+	AssertMacro(PageIsValid(page)), \
+	AssertMacro(ItemIdHasStorage(itemId)), \
+	(IndexTuple)(((char *)(page)) + ItemIdGetOffset(itemId)) \
+)
+
 /*
  * PageGetMaxOffsetNumber
  *		Returns the maximum offset number used by the given page.

p2_heapam_refactoring.patch (text/x-patch)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 90a6937..1161167 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -12,27 +12,40 @@
  *
  *
  * INTERFACE ROUTINES
- *		relation_open	- open any relation by relation OID
- *		relation_openrv - open any relation specified by a RangeVar
- *		relation_close	- close any relation
- *		heap_open		- open a heap relation by relation OID
- *		heap_openrv		- open a heap relation specified by a RangeVar
- *		heap_close		- (now just a macro for relation_close)
- *		heap_beginscan	- begin relation scan
- *		heap_rescan		- restart a relation scan
- *		heap_endscan	- end relation scan
+ *		heap_beginscan	- begin heap relation scan
+ *		heap_rescan		- restart a heap relation scan
+ *		heap_endscan	- end heap relation scan
  *		heap_getnext	- retrieve next tuple in scan
  *		heap_fetch		- retrieve tuple with given tid
- *		heap_insert		- insert tuple into a relation
- *		heap_multi_insert - insert multiple tuples into a relation
- *		heap_delete		- delete a tuple from a relation
- *		heap_update		- replace a tuple in a relation with another tuple
- *		heap_sync		- sync heap, for when no WAL has been written
+ *		heap_insert		- insert tuple into a heap relation
+ *		heap_multi_insert - insert multiple tuples into a heap relation
+ *		heap_delete		- delete a tuple from a heap relation
+ *		heap_update		- replace a tuple in a heap relation with another tuple
+ *
+ *	Heap AM code is spread across many files.
+ *	Files containing interface routines for the heap AM:
+ *		Vacuum routines - src/backend/commands/vacuumlazy.c
+ *		CLUSTER and VACUUM FULL - rewriteheap.c
+ *		Page pruning and HOT-chain management - pruneheap.c
+ *		Routines related to interaction with buffer manager - hio.c
+ *		Heap creation, destruction and truncation - src/backend/catalog/heap.c
  *
  * NOTES
- *	  This file contains the heap_ routines which implement
- *	  the POSTGRES heap access method used for all POSTGRES
- *	  relations.
+ *	This file contains the heap_ routines which implement
+ *	the POSTGRES heap access method used for POSTGRES
+ *	relations of the following kinds:
+ *		RELKIND_RELATION
+ *		RELKIND_SEQUENCE
+ *		RELKIND_TOASTVALUE
+ *		RELKIND_MATVIEW
+ *
+ *	Relations of the kinds
+ *		RELKIND_VIEW
+ *		RELKIND_COMPOSITE_TYPE
+ *		RELKIND_FOREIGN_TABLE
+ *	have no physical storage, so they are not actually heap relations.
+ *	If they occur in heap functions, that is overlooked code that
+ *	requires refactoring.
  *
  *-------------------------------------------------------------------------
  */
@@ -204,16 +217,16 @@ static const int MultiXactStatusLock[MaxMultiXactStatus + 1] =
 			(MultiXactStatusLock[(status)])
 
 /* ----------------------------------------------------------------
- *						 heap support routines
+ *						 heap scan routines
  * ----------------------------------------------------------------
  */
 
 /* ----------------
- *		initscan - scan code common to heap_beginscan and heap_rescan
+ *		heap_initscan - scan code common to heap_beginscan and heap_rescan
  * ----------------
  */
 static void
-initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
+heap_initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)
 {
 	bool		allow_strat;
 	bool		allow_sync;
@@ -337,9 +350,12 @@ heap_setscanlimits(HeapScanDesc scan, BlockNumber startBlk, BlockNumber numBlks)
 }
 
 /*
- * heapgetpage - subroutine for heapgettup()
+ * heapgetpage - subroutine for heapgettup().
+ *	It is also used by tablesample_getnext(), so it cannot be
+ *	marked static.
  *
- * This routine reads and pins the specified page of the relation.
+ * This routine reads and pins the specified page of the heap relation.
+ * It also prunes the page, if possible.
  * In page-at-a-time mode it performs additional work, namely determining
  * which tuples on the page are visible.
  */
@@ -1092,281 +1108,8 @@ fastgetattr(HeapTuple tup, int attnum, TupleDesc tupleDesc,
 }
 #endif   /* defined(DISABLE_COMPLEX_MACRO) */
 
-
-/* ----------------------------------------------------------------
- *					 heap access method interface
- * ----------------------------------------------------------------
- */
-
-/* ----------------
- *		relation_open - open any relation by relation OID
- *
- *		If lockmode is not "NoLock", the specified kind of lock is
- *		obtained on the relation.  (Generally, NoLock should only be
- *		used if the caller knows it has some appropriate lock on the
- *		relation already.)
- *
- *		An error is raised if the relation does not exist.
- *
- *		NB: a "relation" is anything with a pg_class entry.  The caller is
- *		expected to check whether the relkind is something it can handle.
- * ----------------
- */
-Relation
-relation_open(Oid relationId, LOCKMODE lockmode)
-{
-	Relation	r;
-
-	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
-
-	/* Get the lock before trying to open the relcache entry */
-	if (lockmode != NoLock)
-		LockRelationOid(relationId, lockmode);
-
-	/* The relcache does all the real work... */
-	r = RelationIdGetRelation(relationId);
-
-	if (!RelationIsValid(r))
-		elog(ERROR, "could not open relation with OID %u", relationId);
-
-	/* Make note that we've accessed a temporary relation */
-	if (RelationUsesLocalBuffers(r))
-		MyXactAccessedTempRel = true;
-
-	pgstat_initstats(r);
-
-	return r;
-}
-
-/* ----------------
- *		try_relation_open - open any relation by relation OID
- *
- *		Same as relation_open, except return NULL instead of failing
- *		if the relation does not exist.
- * ----------------
- */
-Relation
-try_relation_open(Oid relationId, LOCKMODE lockmode)
-{
-	Relation	r;
-
-	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
-
-	/* Get the lock first */
-	if (lockmode != NoLock)
-		LockRelationOid(relationId, lockmode);
-
-	/*
-	 * Now that we have the lock, probe to see if the relation really exists
-	 * or not.
-	 */
-	if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relationId)))
-	{
-		/* Release useless lock */
-		if (lockmode != NoLock)
-			UnlockRelationOid(relationId, lockmode);
-
-		return NULL;
-	}
-
-	/* Should be safe to do a relcache load */
-	r = RelationIdGetRelation(relationId);
-
-	if (!RelationIsValid(r))
-		elog(ERROR, "could not open relation with OID %u", relationId);
-
-	/* Make note that we've accessed a temporary relation */
-	if (RelationUsesLocalBuffers(r))
-		MyXactAccessedTempRel = true;
-
-	pgstat_initstats(r);
-
-	return r;
-}
-
-/* ----------------
- *		relation_openrv - open any relation specified by a RangeVar
- *
- *		Same as relation_open, but the relation is specified by a RangeVar.
- * ----------------
- */
-Relation
-relation_openrv(const RangeVar *relation, LOCKMODE lockmode)
-{
-	Oid			relOid;
-
-	/*
-	 * Check for shared-cache-inval messages before trying to open the
-	 * relation.  This is needed even if we already hold a lock on the
-	 * relation, because GRANT/REVOKE are executed without taking any lock on
-	 * the target relation, and we want to be sure we see current ACL
-	 * information.  We can skip this if asked for NoLock, on the assumption
-	 * that such a call is not the first one in the current command, and so we
-	 * should be reasonably up-to-date already.  (XXX this all could stand to
-	 * be redesigned, but for the moment we'll keep doing this like it's been
-	 * done historically.)
-	 */
-	if (lockmode != NoLock)
-		AcceptInvalidationMessages();
-
-	/* Look up and lock the appropriate relation using namespace search */
-	relOid = RangeVarGetRelid(relation, lockmode, false);
-
-	/* Let relation_open do the rest */
-	return relation_open(relOid, NoLock);
-}
-
-/* ----------------
- *		relation_openrv_extended - open any relation specified by a RangeVar
- *
- *		Same as relation_openrv, but with an additional missing_ok argument
- *		allowing a NULL return rather than an error if the relation is not
- *		found.  (Note that some other causes, such as permissions problems,
- *		will still result in an ereport.)
- * ----------------
- */
-Relation
-relation_openrv_extended(const RangeVar *relation, LOCKMODE lockmode,
-						 bool missing_ok)
-{
-	Oid			relOid;
-
-	/*
-	 * Check for shared-cache-inval messages before trying to open the
-	 * relation.  See comments in relation_openrv().
-	 */
-	if (lockmode != NoLock)
-		AcceptInvalidationMessages();
-
-	/* Look up and lock the appropriate relation using namespace search */
-	relOid = RangeVarGetRelid(relation, lockmode, missing_ok);
-
-	/* Return NULL on not-found */
-	if (!OidIsValid(relOid))
-		return NULL;
-
-	/* Let relation_open do the rest */
-	return relation_open(relOid, NoLock);
-}
-
-/* ----------------
- *		relation_close - close any relation
- *
- *		If lockmode is not "NoLock", we then release the specified lock.
- *
- *		Note that it is often sensible to hold a lock beyond relation_close;
- *		in that case, the lock is released automatically at xact end.
- * ----------------
- */
-void
-relation_close(Relation relation, LOCKMODE lockmode)
-{
-	LockRelId	relid = relation->rd_lockInfo.lockRelId;
-
-	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
-
-	/* The relcache does the real work... */
-	RelationClose(relation);
-
-	if (lockmode != NoLock)
-		UnlockRelationId(&relid, lockmode);
-}
-
-
-/* ----------------
- *		heap_open - open a heap relation by relation OID
- *
- *		This is essentially relation_open plus check that the relation
- *		is not an index nor a composite type.  (The caller should also
- *		check that it's not a view or foreign table before assuming it has
- *		storage.)
- * ----------------
- */
-Relation
-heap_open(Oid relationId, LOCKMODE lockmode)
-{
-	Relation	r;
-
-	r = relation_open(relationId, lockmode);
-
-	if (r->rd_rel->relkind == RELKIND_INDEX)
-		ereport(ERROR,
-				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-				 errmsg("\"%s\" is an index",
-						RelationGetRelationName(r))));
-	else if (r->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
-		ereport(ERROR,
-				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-				 errmsg("\"%s\" is a composite type",
-						RelationGetRelationName(r))));
-
-	return r;
-}
-
-/* ----------------
- *		heap_openrv - open a heap relation specified
- *		by a RangeVar node
- *
- *		As above, but relation is specified by a RangeVar.
- * ----------------
- */
-Relation
-heap_openrv(const RangeVar *relation, LOCKMODE lockmode)
-{
-	Relation	r;
-
-	r = relation_openrv(relation, lockmode);
-
-	if (r->rd_rel->relkind == RELKIND_INDEX)
-		ereport(ERROR,
-				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-				 errmsg("\"%s\" is an index",
-						RelationGetRelationName(r))));
-	else if (r->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
-		ereport(ERROR,
-				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-				 errmsg("\"%s\" is a composite type",
-						RelationGetRelationName(r))));
-
-	return r;
-}
-
-/* ----------------
- *		heap_openrv_extended - open a heap relation specified
- *		by a RangeVar node
- *
- *		As above, but optionally return NULL instead of failing for
- *		relation-not-found.
- * ----------------
- */
-Relation
-heap_openrv_extended(const RangeVar *relation, LOCKMODE lockmode,
-					 bool missing_ok)
-{
-	Relation	r;
-
-	r = relation_openrv_extended(relation, lockmode, missing_ok);
-
-	if (r)
-	{
-		if (r->rd_rel->relkind == RELKIND_INDEX)
-			ereport(ERROR,
-					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-					 errmsg("\"%s\" is an index",
-							RelationGetRelationName(r))));
-		else if (r->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
-			ereport(ERROR,
-					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-					 errmsg("\"%s\" is a composite type",
-							RelationGetRelationName(r))));
-	}
-
-	return r;
-}
-
-
 /* ----------------
- *		heap_beginscan	- begin relation scan
+ *		heap_beginscan	- begin heap relation scan
  *
  * heap_beginscan is the "standard" case.
  *
@@ -1435,6 +1178,9 @@ heap_beginscan_sampling(Relation relation, Snapshot snapshot,
 								   false, true, false);
 }
 
+/*
+ * heap_beginscan_internal - the workhorse for the functions above
+ */
 static HeapScanDesc
 heap_beginscan_internal(Relation relation, Snapshot snapshot,
 						int nkeys, ScanKey key,
@@ -1496,21 +1242,21 @@ heap_beginscan_internal(Relation relation, Snapshot snapshot,
 	scan->rs_ctup.t_tableOid = RelationGetRelid(relation);
 
 	/*
-	 * we do this here instead of in initscan() because heap_rescan also calls
-	 * initscan() and we don't want to allocate memory again
+	 * we do this here instead of in heap_initscan() because heap_rescan also
+	 * calls heap_initscan() and we don't want to allocate memory again
 	 */
 	if (nkeys > 0)
 		scan->rs_key = (ScanKey) palloc(sizeof(ScanKeyData) * nkeys);
 	else
 		scan->rs_key = NULL;
 
-	initscan(scan, key, false);
+	heap_initscan(scan, key, false);
 
 	return scan;
 }
 
 /* ----------------
- *		heap_rescan		- restart a relation scan
+ *		heap_rescan		- restart a heap relation scan
  * ----------------
  */
 void
@@ -1526,7 +1272,7 @@ heap_rescan(HeapScanDesc scan,
 	/*
 	 * reinitialize scan descriptor
 	 */
-	initscan(scan, key, true);
+	heap_initscan(scan, key, true);
 
 	/*
 	 * reset parallel scan, if present
@@ -1549,7 +1295,8 @@ heap_rescan(HeapScanDesc scan,
 }
 
 /* ----------------
- *		heap_rescan_set_params	- restart a relation scan after changing params
+ *		heap_rescan_set_params	- restart a heap relation scan after changing
+ *								  params
  *
  * This call allows changing the buffer strategy, syncscan, and pagemode
  * options before starting a fresh scan.  Note that although the actual use
@@ -1570,10 +1317,7 @@ heap_rescan_set_params(HeapScanDesc scan, ScanKey key,
 }
 
 /* ----------------
- *		heap_endscan	- end relation scan
- *
- *		See how to integrate with index scans.
- *		Check handling if reldesc caching.
+ *		heap_endscan - free resources at the end of a heap relation scan
  * ----------------
  */
 void
@@ -1604,6 +1348,12 @@ heap_endscan(HeapScanDesc scan)
 	pfree(scan);
 }
 
+
+/* ----------------------------------------------------------------
+ *				 heap parallel scan support routines
+ * ----------------------------------------------------------------
+ */
+
 /* ----------------
  *		heap_parallelscan_estimate - estimate storage for ParallelHeapScanDesc
  *
@@ -1632,7 +1382,7 @@ heap_parallelscan_initialize(ParallelHeapScanDesc target, Relation relation,
 {
 	target->phs_relid = RelationGetRelid(relation);
 	target->phs_nblocks = RelationGetNumberOfBlocks(relation);
-	/* compare phs_syncscan initialization to similar logic in initscan */
+	/* compare phs_syncscan initialization to similar logic in heap_initscan */
 	target->phs_syncscan = synchronize_seqscans &&
 		!RelationUsesLocalBuffers(relation) &&
 		target->phs_nblocks > NBuffers / 4;
@@ -1752,15 +1502,6 @@ retry:
 	return page;
 }
 
-/* ----------------
- *		heap_getnext	- retrieve next tuple in scan
- *
- *		Fix to work with index relations.
- *		We don't return the buffer anymore, but you can get it from the
- *		returned HeapTuple.
- * ----------------
- */
-
 #ifdef HEAPDEBUGALL
 #define HEAPDEBUG_1 \
 	elog(DEBUG2, "heap_getnext([%s,nkeys=%d],dir=%d) called", \
@@ -1775,7 +1516,12 @@ retry:
 #define HEAPDEBUG_3
 #endif   /* !defined(HEAPDEBUGALL) */
 
-
+/* ----------------
+ *		heap_getnext - retrieve the next heap tuple in a scan.
+ * This function is an implementation of the amgettuple interface
+ * and should be used by external callers.
+ * ----------------
+ */
 HeapTuple
 heap_getnext(HeapScanDesc scan, ScanDirection direction)
 {
@@ -1796,8 +1542,8 @@ heap_getnext(HeapScanDesc scan, ScanDirection direction)
 	}
 
 	/*
-	 * if we get here it means we have a new current scan tuple, so point to
-	 * the proper return buffer and return the tuple.
+	 * if we get here it means we have a new current scan tuple, so bump the
+	 * stats counter and return the tuple.
 	 */
 	HEAPDEBUG_3;				/* heap_getnext returning tuple */
 
@@ -2262,42 +2008,59 @@ heap_get_latest_tid(Relation relation,
 	}							/* end of loop */
 }
 
+/* ----------------------------------------------------------------
+ *						 heap insert routines
+ * ----------------------------------------------------------------
+ */
 
 /*
- * UpdateXmaxHintBits - update tuple hint bits after xmax transaction ends
- *
- * This is called after we have waited for the XMAX transaction to terminate.
- * If the transaction aborted, we guarantee the XMAX_INVALID hint bit will
- * be set on exit.  If the transaction committed, we set the XMAX_COMMITTED
- * hint bit if possible --- but beware that that may not yet be possible,
- * if the transaction committed asynchronously.
+ * RelationPutHeapTuple - place tuple at specified page
  *
- * Note that if the transaction was a locker only, we set HEAP_XMAX_INVALID
- * even if it commits.
+ * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
- * Hence callers should look only at XMAX_INVALID.
- *
- * Note this is not allowed for tuples whose xmax is a multixact.
+ * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
  */
 static void
-UpdateXmaxHintBits(HeapTupleHeader tuple, Buffer buffer, TransactionId xid)
+RelationPutHeapTuple(Relation relation,
+					 Buffer buffer,
+					 HeapTuple tuple,
+					 bool token)
 {
-	Assert(TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple), xid));
-	Assert(!(tuple->t_infomask & HEAP_XMAX_IS_MULTI));
+	Page		pageHeader;
+	OffsetNumber offnum;
 
-	if (!(tuple->t_infomask & (HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID)))
+	/*
+	 * A tuple that's being inserted speculatively should already have its
+	 * token set.
+	 */
+	Assert(!token || HeapTupleHeaderIsSpeculative(tuple->t_data));
+
+	/* Add the tuple to the page */
+	pageHeader = BufferGetPage(buffer);
+
+	offnum = PageAddItem(pageHeader, (Item) tuple->t_data,
+						 tuple->t_len, InvalidOffsetNumber, false, true);
+
+	if (offnum == InvalidOffsetNumber)
+		elog(PANIC, "failed to add tuple to page");
+
+	/* Update tuple->t_self to the actual position where it was stored */
+	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
+
+	/*
+	 * Insert the correct position into CTID of the stored tuple, too (unless
+	 * this is a speculative insertion, in which case the token is held in
+	 * CTID field instead)
+	 */
+	if (!token)
 	{
-		if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) &&
-			TransactionIdDidCommit(xid))
-			HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED,
-								 xid);
-		else
-			HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_INVALID,
-								 InvalidTransactionId);
+		ItemId			itemId = PageGetItemId(pageHeader, offnum);
+		HeapTupleHeader	onpage_tup = PageGetItemHeapHeaderOnly(pageHeader, itemId);
+
+		onpage_tup->t_ctid = tuple->t_self;
 	}
 }
 
-
 /*
  * GetBulkInsertState - prepare status object for a bulk insert
  */
@@ -2324,7 +2087,6 @@ FreeBulkInsertState(BulkInsertState bistate)
 	pfree(bistate);
 }
 
-
 /*
  *	heap_insert		- insert tuple into a heap
  *
@@ -2916,6 +2678,45 @@ simple_heap_insert(Relation relation, HeapTuple tup)
 	return heap_insert(relation, tup, GetCurrentCommandId(true), 0, NULL);
 }
 
+/* ----------------------------------------------------------------
+ *						 heap delete/update routines
+ * ----------------------------------------------------------------
+ */
+
+/*
+ * UpdateXmaxHintBits - update tuple hint bits after xmax transaction ends
+ *
+ * This is called after we have waited for the XMAX transaction to terminate.
+ * If the transaction aborted, we guarantee the XMAX_INVALID hint bit will
+ * be set on exit.  If the transaction committed, we set the XMAX_COMMITTED
+ * hint bit if possible --- but beware that that may not yet be possible,
+ * if the transaction committed asynchronously.
+ *
+ * Note that if the transaction was a locker only, we set HEAP_XMAX_INVALID
+ * even if it commits.
+ *
+ * Hence callers should look only at XMAX_INVALID.
+ *
+ * Note this is not allowed for tuples whose xmax is a multixact.
+ */
+static void
+UpdateXmaxHintBits(HeapTupleHeader tuple, Buffer buffer, TransactionId xid)
+{
+	Assert(TransactionIdEquals(HeapTupleHeaderGetRawXmax(tuple), xid));
+	Assert(!(tuple->t_infomask & HEAP_XMAX_IS_MULTI));
+
+	if (!(tuple->t_infomask & (HEAP_XMAX_COMMITTED | HEAP_XMAX_INVALID)))
+	{
+		if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_infomask) &&
+			TransactionIdDidCommit(xid))
+			HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_COMMITTED,
+								 xid);
+		else
+			HeapTupleSetHintBits(tuple, buffer, HEAP_XMAX_INVALID,
+								 InvalidTransactionId);
+	}
+}
+
 /*
  * Given infomask/infomask2, compute the bits that must be saved in the
  * "infobits" field of xl_heap_delete, xl_heap_update, xl_heap_lock,
@@ -4503,6 +4304,109 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	}
 }
 
+/*
+ * heap_inplace_update - update a tuple "in place" (ie, overwrite it)
+ *
+ * Overwriting violates both MVCC and transactional safety, so the uses
+ * of this function in Postgres are extremely limited.  Nonetheless we
+ * find some places to use it.
+ *
+ * The tuple cannot change size, and therefore it's reasonable to assume
+ * that its null bitmap (if any) doesn't change either.  So we just
+ * overwrite the data portion of the tuple without touching the null
+ * bitmap or any of the header fields.
+ *
+ * tuple is an in-memory tuple structure containing the data to be written
+ * over the target tuple.  Also, tuple->t_self identifies the target tuple.
+ */
+void
+heap_inplace_update(Relation relation, HeapTuple tuple)
+{
+	Buffer		buffer;
+	Page		page;
+	OffsetNumber offnum;
+	ItemId		lp = NULL;
+	HeapTupleHeader htup;
+	uint32		oldlen;
+	uint32		newlen;
+
+	/*
+	 * For now, parallel operations are required to be strictly read-only.
+	 * Unlike a regular update, this should never create a combo CID, so it
+	 * might be possible to relax this restriction, but not without more
+	 * thought and testing.  It's not clear that it would be useful, anyway.
+	 */
+	if (IsInParallelMode())
+		ereport(ERROR,
+				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
+				 errmsg("cannot update tuples during a parallel operation")));
+
+	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(&(tuple->t_self)));
+	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
+	page = (Page) BufferGetPage(buffer);
+
+	offnum = ItemPointerGetOffsetNumber(&(tuple->t_self));
+	if (PageGetMaxOffsetNumber(page) >= offnum)
+		lp = PageGetItemId(page, offnum);
+
+	if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
+		elog(ERROR, "invalid lp");
+
+	htup = PageGetItemHeap(page, lp);
+
+	oldlen = ItemIdGetLength(lp) - htup->t_hoff;
+	newlen = tuple->t_len - tuple->t_data->t_hoff;
+	if (oldlen != newlen || htup->t_hoff != tuple->t_data->t_hoff)
+		elog(ERROR, "wrong tuple length");
+
+	/* NO EREPORT(ERROR) from here till changes are logged */
+	START_CRIT_SECTION();
+
+	memcpy((char *) htup + htup->t_hoff,
+		   (char *) tuple->t_data + tuple->t_data->t_hoff,
+		   newlen);
+
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(relation))
+	{
+		xl_heap_inplace xlrec;
+		XLogRecPtr	recptr;
+
+		xlrec.offnum = ItemPointerGetOffsetNumber(&tuple->t_self);
+
+		XLogBeginInsert();
+		XLogRegisterData((char *) &xlrec, SizeOfHeapInplace);
+
+		XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+		XLogRegisterBufData(0, (char *) htup + htup->t_hoff, newlen);
+
+		/* inplace updates aren't decoded atm, don't log the origin */
+
+		recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_INPLACE);
+
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	UnlockReleaseBuffer(buffer);
+
+	/*
+	 * Send out shared cache inval if necessary.  Note that because we only
+	 * pass the new version of the tuple, this mustn't be used for any
+	 * operations that could change catcache lookup keys.  But we aren't
+	 * bothering with index updates either, so that's true a fortiori.
+	 */
+	if (!IsBootstrapProcessingMode())
+		CacheInvalidateHeapTuple(relation, tuple, NULL);
+}
+
+/* ----------------------------------------------------------------
+ *						 heap lock tuple routines
+ * ----------------------------------------------------------------
+ */
 
 /*
  * Return the MultiXactStatus corresponding to the given tuple lock mode.
@@ -6153,104 +6057,10 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	pgstat_count_heap_delete(relation);
 }
 
-/*
- * heap_inplace_update - update a tuple "in place" (ie, overwrite it)
- *
- * Overwriting violates both MVCC and transactional safety, so the uses
- * of this function in Postgres are extremely limited.  Nonetheless we
- * find some places to use it.
- *
- * The tuple cannot change size, and therefore it's reasonable to assume
- * that its null bitmap (if any) doesn't change either.  So we just
- * overwrite the data portion of the tuple without touching the null
- * bitmap or any of the header fields.
- *
- * tuple is an in-memory tuple structure containing the data to be written
- * over the target tuple.  Also, tuple->t_self identifies the target tuple.
+/* ----------------------------------------------------------------
+ *						 heap multixact and freeze tuple routines
+ * ----------------------------------------------------------------
  */
-void
-heap_inplace_update(Relation relation, HeapTuple tuple)
-{
-	Buffer		buffer;
-	Page		page;
-	OffsetNumber offnum;
-	ItemId		lp = NULL;
-	HeapTupleHeader htup;
-	uint32		oldlen;
-	uint32		newlen;
-
-	/*
-	 * For now, parallel operations are required to be strictly read-only.
-	 * Unlike a regular update, this should never create a combo CID, so it
-	 * might be possible to relax this restriction, but not without more
-	 * thought and testing.  It's not clear that it would be useful, anyway.
-	 */
-	if (IsInParallelMode())
-		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
-				 errmsg("cannot update tuples during a parallel operation")));
-
-	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(&(tuple->t_self)));
-	LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
-	page = (Page) BufferGetPage(buffer);
-
-	offnum = ItemPointerGetOffsetNumber(&(tuple->t_self));
-	if (PageGetMaxOffsetNumber(page) >= offnum)
-		lp = PageGetItemId(page, offnum);
-
-	if (PageGetMaxOffsetNumber(page) < offnum || !ItemIdIsNormal(lp))
-		elog(ERROR, "invalid lp");
-
-	htup = PageGetItemHeap(page, lp);
-
-	oldlen = ItemIdGetLength(lp) - htup->t_hoff;
-	newlen = tuple->t_len - tuple->t_data->t_hoff;
-	if (oldlen != newlen || htup->t_hoff != tuple->t_data->t_hoff)
-		elog(ERROR, "wrong tuple length");
-
-	/* NO EREPORT(ERROR) from here till changes are logged */
-	START_CRIT_SECTION();
-
-	memcpy((char *) htup + htup->t_hoff,
-		   (char *) tuple->t_data + tuple->t_data->t_hoff,
-		   newlen);
-
-	MarkBufferDirty(buffer);
-
-	/* XLOG stuff */
-	if (RelationNeedsWAL(relation))
-	{
-		xl_heap_inplace xlrec;
-		XLogRecPtr	recptr;
-
-		xlrec.offnum = ItemPointerGetOffsetNumber(&tuple->t_self);
-
-		XLogBeginInsert();
-		XLogRegisterData((char *) &xlrec, SizeOfHeapInplace);
-
-		XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
-		XLogRegisterBufData(0, (char *) htup + htup->t_hoff, newlen);
-
-		/* inplace updates aren't decoded atm, don't log the origin */
-
-		recptr = XLogInsert(RM_HEAP_ID, XLOG_HEAP_INPLACE);
-
-		PageSetLSN(page, recptr);
-	}
-
-	END_CRIT_SECTION();
-
-	UnlockReleaseBuffer(buffer);
-
-	/*
-	 * Send out shared cache inval if necessary.  Note that because we only
-	 * pass the new version of the tuple, this mustn't be used for any
-	 * operations that could change catcache lookup keys.  But we aren't
-	 * bothering with index updates either, so that's true a fortiori.
-	 */
-	if (!IsBootstrapProcessingMode())
-		CacheInvalidateHeapTuple(relation, tuple, NULL);
-}
 
 #define		FRM_NOOP				0x0001
 #define		FRM_INVALIDATE_XMAX		0x0002
@@ -7285,6 +7095,11 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	/* *latestRemovedXid may still be invalid at end */
 }
 
+/* ----------------------------------------------------------------
+ *						 heap log routines
+ * ----------------------------------------------------------------
+ */
+
 /*
  * Perform XLogInsert to register a heap cleanup info message. These
  * messages are sent once per VACUUM and are required because
@@ -9041,43 +8856,3 @@ heap2_redo(XLogReaderState *record)
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
 }
-
-/*
- *	heap_sync		- sync a heap, for use when no WAL has been written
- *
- * This forces the heap contents (including TOAST heap if any) down to disk.
- * If we skipped using WAL, and WAL is otherwise needed, we must force the
- * relation down to disk before it's safe to commit the transaction.  This
- * requires writing out any dirty buffers and then doing a forced fsync.
- *
- * Indexes are not touched.  (Currently, index operations associated with
- * the commands that use this are WAL-logged and so do not need fsync.
- * That behavior might change someday, but in any case it's likely that
- * any fsync decisions required would be per-index and hence not appropriate
- * to be done here.)
- */
-void
-heap_sync(Relation rel)
-{
-	/* non-WAL-logged tables never need fsync */
-	if (!RelationNeedsWAL(rel))
-		return;
-
-	/* main heap */
-	FlushRelationBuffers(rel);
-	/* FlushRelationBuffers will have opened rd_smgr */
-	smgrimmedsync(rel->rd_smgr, MAIN_FORKNUM);
-
-	/* FSM is not critical, don't bother syncing it */
-
-	/* toast heap, if any */
-	if (OidIsValid(rel->rd_rel->reltoastrelid))
-	{
-		Relation	toastrel;
-
-		toastrel = heap_open(rel->rd_rel->reltoastrelid, AccessShareLock);
-		FlushRelationBuffers(toastrel);
-		smgrimmedsync(toastrel->rd_smgr, MAIN_FORKNUM);
-		heap_close(toastrel, AccessShareLock);
-	}
-}
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index e4ef0e1..d89bdb5 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -10,6 +10,13 @@
  * IDENTIFICATION
  *	  src/backend/access/heap/hio.c
  *
+ *		relation_open	- open any relation by relation OID
+ *		relation_openrv - open any relation specified by a RangeVar
+ *		relation_close	- close any relation
+ *		heap_open		- open a heap relation by relation OID
+ *		heap_openrv		- open a heap relation specified by a RangeVar
+ *		heap_close		- (now just a macro for relation_close)
+ *
  *-------------------------------------------------------------------------
  */
 
@@ -23,56 +30,320 @@
 #include "storage/freespace.h"
 #include "storage/lmgr.h"
 #include "storage/smgr.h"
+#include "access/xact.h"
+#include "pgstat.h"
+#include "utils/syscache.h"
+#include "utils/inval.h"
+#include "catalog/namespace.h"
 
 
-/*
- * RelationPutHeapTuple - place tuple at specified page
+/* ----------------
+ *		relation_open - open any relation by relation OID
+ *
+ *		If lockmode is not "NoLock", the specified kind of lock is
+ *		obtained on the relation.  (Generally, NoLock should only be
+ *		used if the caller knows it has some appropriate lock on the
+ *		relation already.)
  *
- * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
+ *		An error is raised if the relation does not exist.
  *
- * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *		NB: a "relation" is anything with a pg_class entry.  The caller is
+ *		expected to check whether the relkind is something it can handle.
+ * ----------------
  */
-void
-RelationPutHeapTuple(Relation relation,
-					 Buffer buffer,
-					 HeapTuple tuple,
-					 bool token)
+Relation
+relation_open(Oid relationId, LOCKMODE lockmode)
+{
+	Relation	r;
+
+	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
+
+	/* Get the lock before trying to open the relcache entry */
+	if (lockmode != NoLock)
+		LockRelationOid(relationId, lockmode);
+
+	/* The relcache does all the real work... */
+	r = RelationIdGetRelation(relationId);
+
+	if (!RelationIsValid(r))
+		elog(ERROR, "could not open relation with OID %u", relationId);
+
+	/* Make note that we've accessed a temporary relation */
+	if (RelationUsesLocalBuffers(r))
+		MyXactAccessedTempRel = true;
+
+	pgstat_initstats(r);
+
+	return r;
+}
+
+/* ----------------
+ *		try_relation_open - open any relation by relation OID
+ *
+ *		Same as relation_open, except return NULL instead of failing
+ *		if the relation does not exist.
+ * ----------------
+ */
+Relation
+try_relation_open(Oid relationId, LOCKMODE lockmode)
 {
-	Page		pageHeader;
-	OffsetNumber offnum;
+	Relation	r;
+
+	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
+
+	/* Get the lock first */
+	if (lockmode != NoLock)
+		LockRelationOid(relationId, lockmode);
 
 	/*
-	 * A tuple that's being inserted speculatively should already have its
-	 * token set.
+	 * Now that we have the lock, probe to see if the relation really exists
+	 * or not.
 	 */
-	Assert(!token || HeapTupleHeaderIsSpeculative(tuple->t_data));
+	if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(relationId)))
+	{
+		/* Release useless lock */
+		if (lockmode != NoLock)
+			UnlockRelationOid(relationId, lockmode);
+
+		return NULL;
+	}
 
-	/* Add the tuple to the page */
-	pageHeader = BufferGetPage(buffer);
+	/* Should be safe to do a relcache load */
+	r = RelationIdGetRelation(relationId);
 
-	offnum = PageAddItem(pageHeader, (Item) tuple->t_data,
-						 tuple->t_len, InvalidOffsetNumber, false, true);
+	if (!RelationIsValid(r))
+		elog(ERROR, "could not open relation with OID %u", relationId);
 
-	if (offnum == InvalidOffsetNumber)
-		elog(PANIC, "failed to add tuple to page");
+	/* Make note that we've accessed a temporary relation */
+	if (RelationUsesLocalBuffers(r))
+		MyXactAccessedTempRel = true;
 
-	/* Update tuple->t_self to the actual position where it was stored */
-	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
+	pgstat_initstats(r);
+
+	return r;
+}
+
+/* ----------------
+ *		relation_openrv - open any relation specified by a RangeVar
+ *
+ *		Same as relation_open, but the relation is specified by a RangeVar.
+ * ----------------
+ */
+Relation
+relation_openrv(const RangeVar *relation, LOCKMODE lockmode)
+{
+	Oid			relOid;
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Check for shared-cache-inval messages before trying to open the
+	 * relation.  This is needed even if we already hold a lock on the
+	 * relation, because GRANT/REVOKE are executed without taking any lock on
+	 * the target relation, and we want to be sure we see current ACL
+	 * information.  We can skip this if asked for NoLock, on the assumption
+	 * that such a call is not the first one in the current command, and so we
+	 * should be reasonably up-to-date already.  (XXX this all could stand to
+	 * be redesigned, but for the moment we'll keep doing this like it's been
+	 * done historically.)
 	 */
-	if (!token)
+	if (lockmode != NoLock)
+		AcceptInvalidationMessages();
+
+	/* Look up and lock the appropriate relation using namespace search */
+	relOid = RangeVarGetRelid(relation, lockmode, false);
+
+	/* Let relation_open do the rest */
+	return relation_open(relOid, NoLock);
+}
+
+/* ----------------
+ *		relation_openrv_extended - open any relation specified by a RangeVar
+ *
+ *		Same as relation_openrv, but with an additional missing_ok argument
+ *		allowing a NULL return rather than an error if the relation is not
+ *		found.  (Note that some other causes, such as permissions problems,
+ *		will still result in an ereport.)
+ * ----------------
+ */
+Relation
+relation_openrv_extended(const RangeVar *relation, LOCKMODE lockmode,
+						 bool missing_ok)
+{
+	Oid			relOid;
+
+	/*
+	 * Check for shared-cache-inval messages before trying to open the
+	 * relation.  See comments in relation_openrv().
+	 */
+	if (lockmode != NoLock)
+		AcceptInvalidationMessages();
+
+	/* Look up and lock the appropriate relation using namespace search */
+	relOid = RangeVarGetRelid(relation, lockmode, missing_ok);
+
+	/* Return NULL on not-found */
+	if (!OidIsValid(relOid))
+		return NULL;
+
+	/* Let relation_open do the rest */
+	return relation_open(relOid, NoLock);
+}
+
+/* ----------------
+ *		relation_close - close any relation
+ *
+ *		If lockmode is not "NoLock", we then release the specified lock.
+ *
+ *		Note that it is often sensible to hold a lock beyond relation_close;
+ *		in that case, the lock is released automatically at xact end.
+ * ----------------
+ */
+void
+relation_close(Relation relation, LOCKMODE lockmode)
+{
+	LockRelId	relid = relation->rd_lockInfo.lockRelId;
+
+	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
+
+	/* The relcache does the real work... */
+	RelationClose(relation);
+
+	if (lockmode != NoLock)
+		UnlockRelationId(&relid, lockmode);
+}
+
+
+/* ----------------
+ *		heap_open - open a heap relation by relation OID
+ *
+ *		This is essentially relation_open plus check that the relation
+ *		is not an index nor a composite type.  (The caller should also
+ *		check that it's not a view or foreign table before assuming it has
+ *		storage.)
+ * ----------------
+ */
+Relation
+heap_open(Oid relationId, LOCKMODE lockmode)
+{
+	Relation	r;
+
+	r = relation_open(relationId, lockmode);
+
+	if (r->rd_rel->relkind == RELKIND_INDEX)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("\"%s\" is an index",
+						RelationGetRelationName(r))));
+	else if (r->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("\"%s\" is a composite type",
+						RelationGetRelationName(r))));
+
+	return r;
+}
+
+/* ----------------
+ *		heap_openrv - open a heap relation specified
+ *		by a RangeVar node
+ *
+ *		As above, but relation is specified by a RangeVar.
+ * ----------------
+ */
+Relation
+heap_openrv(const RangeVar *relation, LOCKMODE lockmode)
+{
+	Relation	r;
+
+	r = relation_openrv(relation, lockmode);
+
+	if (r->rd_rel->relkind == RELKIND_INDEX)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("\"%s\" is an index",
+						RelationGetRelationName(r))));
+	else if (r->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
+		ereport(ERROR,
+				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+				 errmsg("\"%s\" is a composite type",
+						RelationGetRelationName(r))));
+
+	return r;
+}
+
+/* ----------------
+ *		heap_openrv_extended - open a heap relation specified
+ *		by a RangeVar node
+ *
+ *		As above, but optionally return NULL instead of failing for
+ *		relation-not-found.
+ * ----------------
+ */
+Relation
+heap_openrv_extended(const RangeVar *relation, LOCKMODE lockmode,
+					 bool missing_ok)
+{
+	Relation	r;
+
+	r = relation_openrv_extended(relation, lockmode, missing_ok);
+
+	if (r)
 	{
-		ItemId			itemId = PageGetItemId(pageHeader, offnum);
-		HeapTupleHeader	onpage_tup = PageGetItemHeapHeaderOnly(pageHeader, itemId);
+		if (r->rd_rel->relkind == RELKIND_INDEX)
+			ereport(ERROR,
+					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+					 errmsg("\"%s\" is an index",
+							RelationGetRelationName(r))));
+		else if (r->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)
+			ereport(ERROR,
+					(errcode(ERRCODE_WRONG_OBJECT_TYPE),
+					 errmsg("\"%s\" is a composite type",
+							RelationGetRelationName(r))));
+	}
+
+	return r;
+}
 
-		onpage_tup->t_ctid = tuple->t_self;
+/*
+ *	heap_sync		- sync a heap, for use when no WAL has been written
+ *
+ * This forces the heap contents (including TOAST heap if any) down to disk.
+ * If we skipped using WAL, and WAL is otherwise needed, we must force the
+ * relation down to disk before it's safe to commit the transaction.  This
+ * requires writing out any dirty buffers and then doing a forced fsync.
+ *
+ * Indexes are not touched.  (Currently, index operations associated with
+ * the commands that use this are WAL-logged and so do not need fsync.
+ * That behavior might change someday, but in any case it's likely that
+ * any fsync decisions required would be per-index and hence not appropriate
+ * to be done here.)
+ */
+void
+heap_sync(Relation rel)
+{
+	/* non-WAL-logged tables never need fsync */
+	if (!RelationNeedsWAL(rel))
+		return;
+
+	/* main heap */
+	FlushRelationBuffers(rel);
+	/* FlushRelationBuffers will have opened rd_smgr */
+	smgrimmedsync(rel->rd_smgr, MAIN_FORKNUM);
+
+	/* FSM is not critical, don't bother syncing it */
+
+	/* toast heap, if any */
+	if (OidIsValid(rel->rd_rel->reltoastrelid))
+	{
+		Relation	toastrel;
+
+		toastrel = heap_open(rel->rd_rel->reltoastrelid, AccessShareLock);
+		FlushRelationBuffers(toastrel);
+		smgrimmedsync(toastrel->rd_smgr, MAIN_FORKNUM);
+		heap_close(toastrel, AccessShareLock);
 	}
 }
 
+
 /*
  * Read in a buffer, using bulk-insert strategy if bistate isn't NULL.
  */
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 3e07b10..c107d0a 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -724,7 +724,6 @@ heap_page_prune_execute(Buffer buffer,
 	PageRepairFragmentation(page);
 }
 
-
 /*
  * For all items in this page, find their respective root line pointers.
  * If item k is part of a HOT-chain with root at item j, then we set
diff --git a/src/backend/bootstrap/bootparse.y b/src/backend/bootstrap/bootparse.y
index 41d2fd4..cff766b 100644
--- a/src/backend/bootstrap/bootparse.y
+++ b/src/backend/bootstrap/bootparse.y
@@ -214,7 +214,7 @@ Boot_CreateStmt:
 							closerel(NULL);
 						}
 
-						boot_reldesc = heap_create($2,
+						boot_reldesc = relation_create($2,
 												   PG_CATALOG_NAMESPACE,
 												   shared_relation ? GLOBALTABLESPACE_OID : 0,
 												   $3,
@@ -231,7 +231,7 @@ Boot_CreateStmt:
 					{
 						Oid id;
 
-						id = heap_create_with_catalog($2,
+						id = relation_create_with_catalog($2,
 													  PG_CATALOG_NAMESPACE,
 													  shared_relation ? GLOBALTABLESPACE_OID : 0,
 													  $3,
diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c
index 04d7840..f6bc9f3 100644
--- a/src/backend/catalog/dependency.c
+++ b/src/backend/catalog/dependency.c
@@ -1153,7 +1153,7 @@ doDeletion(const ObjectAddress *object, int flags)
 						RemoveAttributeById(object->objectId,
 											object->objectSubId);
 					else
-						heap_drop_with_catalog(object->objectId);
+						relation_drop_with_catalog(object->objectId);
 				}
 				break;
 			}
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index e997b57..ae8da25 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1,7 +1,7 @@
 /*-------------------------------------------------------------------------
  *
  * heap.c
- *	  code to create and destroy POSTGRES heap relations
+ *	  code to create and destroy POSTGRES relations
  *
  * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
@@ -12,18 +12,11 @@
  *
  *
  * INTERFACE ROUTINES
- *		heap_create()			- Create an uncataloged heap relation
- *		heap_create_with_catalog() - Create a cataloged relation
- *		heap_drop_with_catalog() - Removes named relation from catalogs
+ *		relation_create()			- Create an uncataloged relation
+ *		relation_create_with_catalog() - Create a cataloged relation
+ *		relation_drop_with_catalog() - Removes named relation from catalogs
  *
  * NOTES
- *	  this code taken from access/heap/create.c, which contains
- *	  the old heap_create_with_catalog, amcreate, and amdestroy.
- *	  those routines will soon call these routines using the function
- *	  manager,
- *	  just like the poorly named "NewXXX" routines do.  The
- *	  "New" routines are all going to die soon, once and for all!
- *		-cim 1/13/91
  *
  *-------------------------------------------------------------------------
  */
@@ -230,7 +223,7 @@ SystemAttributeByName(const char *attname, bool relhasoids)
 
 
 /* ----------------------------------------------------------------
- *		heap_create		- Create an uncataloged heap relation
+ *		relation_create		- Create an uncataloged relation
  *
  *		Note API change: the caller must now always provide the OID
  *		to use for the relation.  The relfilenode may (and, normally,
@@ -241,7 +234,7 @@ SystemAttributeByName(const char *attname, bool relhasoids)
  * ----------------------------------------------------------------
  */
 Relation
-heap_create(const char *relname,
+relation_create(const char *relname,
 			Oid relnamespace,
 			Oid reltablespace,
 			Oid relid,
@@ -361,7 +354,7 @@ heap_create(const char *relname,
 }
 
 /* ----------------------------------------------------------------
- *		heap_create_with_catalog		- Create a cataloged relation
+ *		relation_create_with_catalog		- Create a cataloged relation
  *
  *		this is done in multiple steps:
  *
@@ -372,7 +365,7 @@ heap_create(const char *relname,
  *		   performs a scan to ensure that no relation with the
  *		   same name already exists.
  *
- *		3) heap_create() is called to create the new relation on disk.
+ *		3) relation_create() is called to create the new relation on disk.
  *
  *		4) TypeCreate() is called to define a new type corresponding
  *		   to the new relation.
@@ -981,7 +974,7 @@ AddNewRelationType(const char *typeName,
 }
 
 /* --------------------------------
- *		heap_create_with_catalog
+ *		relation_create_with_catalog
  *
  *		creates a new cataloged relation.  see comments above.
  *
@@ -1015,7 +1008,7 @@ AddNewRelationType(const char *typeName,
  * --------------------------------
  */
 Oid
-heap_create_with_catalog(const char *relname,
+relation_create_with_catalog(const char *relname,
 						 Oid relnamespace,
 						 Oid reltablespace,
 						 Oid relid,
@@ -1157,7 +1150,7 @@ heap_create_with_catalog(const char *relname,
 	 * disk file.  (If we fail further down, it's the smgr's responsibility to
 	 * remove the disk file again.)
 	 */
-	new_rel_desc = heap_create(relname,
+	new_rel_desc = relation_create(relname,
 							   relnamespace,
 							   reltablespace,
 							   relid,
@@ -1354,7 +1347,7 @@ heap_create_with_catalog(const char *relname,
 	{
 		Assert(relkind == RELKIND_RELATION || relkind == RELKIND_MATVIEW ||
 			   relkind == RELKIND_TOASTVALUE);
-		heap_create_init_fork(new_rel_desc);
+		relation_create_init_fork(new_rel_desc);
 	}
 
 	/*
@@ -1375,7 +1368,7 @@ heap_create_with_catalog(const char *relname,
  * this will hit the disk before the next checkpoint moves the redo pointer.
  */
 void
-heap_create_init_fork(Relation rel)
+relation_create_init_fork(Relation rel)
 {
 	RelationOpenSmgr(rel);
 	smgrcreate(rel->rd_smgr, INIT_FORKNUM, false);
@@ -1746,16 +1739,19 @@ RemoveAttrDefaultById(Oid attrdefId)
 }
 
 /*
- * heap_drop_with_catalog	- removes specified relation from catalogs
+ * relation_drop_with_catalog	- removes specified relation from catalogs
  *
  * Note that this routine is not responsible for dropping objects that are
  * linked to the pg_class entry via dependencies (for example, indexes and
  * constraints).  Those are deleted by the dependency-tracing logic in
  * dependency.c before control gets here.  In general, therefore, this routine
  * should never be called directly; go through performDeletion() instead.
+ *
+ * NOTE that index relations require special treatment and must use index_drop
+ * instead.
  */
 void
-heap_drop_with_catalog(Oid relid)
+relation_drop_with_catalog(Oid relid)
 {
 	Relation	rel;
 
@@ -1793,7 +1789,7 @@ heap_drop_with_catalog(Oid relid)
 		if (!HeapTupleIsValid(tuple))
 			elog(ERROR, "cache lookup failed for foreign table %u", relid);
 
 		simple_heap_delete(rel, &tuple->t_self);
 
 		ReleaseSysCache(tuple);
 		heap_close(rel, RowExclusiveLock);
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 7b30e46..a1ff729 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -834,7 +834,7 @@ index_create(Relation heapRelation,
 	 * we fail further down, it's the smgr's responsibility to remove the disk
 	 * file again.)
 	 */
-	indexRelation = heap_create(indexRelationName,
+	indexRelation = relation_create(indexRelationName,
 								namespaceId,
 								tableSpaceId,
 								indexRelationId,
@@ -857,7 +857,7 @@ index_create(Relation heapRelation,
 
 	/*
 	 * Fill in fields of the index's pg_class entry that are not set correctly
-	 * by heap_create.
+	 * by relation_create.
 	 *
 	 * XXX should have a cleaner way to create cataloged indexes
 	 */
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index 564e10e..4314f3e 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -259,7 +259,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
 		binary_upgrade_next_toast_pg_type_oid = InvalidOid;
 	}
 
-	toast_relid = heap_create_with_catalog(toast_relname,
+	toast_relid = relation_create_with_catalog(toast_relname,
 										   namespaceid,
 										   rel->rd_rel->reltablespace,
 										   toastOid,
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index 43bbd90..a3d1fe7 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -660,7 +660,7 @@ make_new_heap(Oid OIDOldHeap, Oid NewTableSpace, char relpersistence,
 	 */
 	snprintf(NewHeapName, sizeof(NewHeapName), "pg_temp_%u", OIDOldHeap);
 
-	OIDNewHeap = heap_create_with_catalog(NewHeapName,
+	OIDNewHeap = relation_create_with_catalog(NewHeapName,
 										  namespaceid,
 										  NewTableSpace,
 										  InvalidOid,
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 86e9814..6ccc2ab 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -602,7 +602,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 	/*
 	 * Find columns with default values and prepare for insertion of the
 	 * defaults.  Pre-cooked (that is, inherited) defaults go into a list of
-	 * CookedConstraint structs that we'll pass to heap_create_with_catalog,
+	 * CookedConstraint structs that we'll pass to relation_create_with_catalog,
 	 * while raw defaults go into a list of RawColumnDefault structs that will
 	 * be processed by AddRelationNewConstraints.  (We can't deal with raw
 	 * expressions until we can do transformExpr.)
@@ -657,7 +657,7 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,
 	 * for immediate handling --- since they don't need parsing, they can be
 	 * stored immediately.
 	 */
-	relationId = heap_create_with_catalog(relname,
+	relationId = relation_create_with_catalog(relname,
 										  namespaceId,
 										  tablespaceId,
 										  InvalidOid,
@@ -1221,7 +1221,7 @@ ExecuteTruncate(TruncateStmt *stmt)
 			RelationSetNewRelfilenode(rel, rel->rd_rel->relpersistence,
 									  RecentXmin, minmulti);
 			if (rel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED)
-				heap_create_init_fork(rel);
+				relation_create_init_fork(rel);
 
 			heap_relid = RelationGetRelid(rel);
 			toast_relid = rel->rd_rel->reltoastrelid;
@@ -1235,7 +1235,7 @@ ExecuteTruncate(TruncateStmt *stmt)
 				RelationSetNewRelfilenode(rel, rel->rd_rel->relpersistence,
 										  RecentXmin, minmulti);
 				if (rel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED)
-					heap_create_init_fork(rel);
+					relation_create_init_fork(rel);
 				heap_close(rel, NoLock);
 			}
 
@@ -10629,7 +10629,7 @@ ATExecDropInherit(Relation rel, RangeVar *parent, LOCKMODE lockmode)
 /*
  * Drop the dependency created by StoreCatalogInheritance1 (CREATE TABLE
  * INHERITS/ALTER TABLE INHERIT -- refclassid will be RelationRelationId) or
- * heap_create_with_catalog (CREATE TABLE OF/ALTER TABLE OF -- refclassid will
+ * relation_create_with_catalog (CREATE TABLE OF/ALTER TABLE OF -- refclassid will
  * be TypeRelationId).  There's no convenient way to do this, so go trawling
  * through pg_depend.
  */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index b3a595c..6c6ab9f 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -76,7 +76,7 @@ typedef struct HeapUpdateFailureData
 /* ----------------
  *		function prototypes for heap access method
  *
- * heap_create, heap_create_with_catalog, and heap_drop_with_catalog
+ * relation_create, relation_create_with_catalog, and relation_drop_with_catalog
  * are declared in catalog/heap.h
  * ----------------
  */
diff --git a/src/include/catalog/heap.h b/src/include/catalog/heap.h
index b80d8d8..b5bd850 100644
--- a/src/include/catalog/heap.h
+++ b/src/include/catalog/heap.h
@@ -39,7 +39,7 @@ typedef struct CookedConstraint
 								 * inherited */
 } CookedConstraint;
 
-extern Relation heap_create(const char *relname,
+extern Relation relation_create(const char *relname,
 			Oid relnamespace,
 			Oid reltablespace,
 			Oid relid,
@@ -51,7 +51,7 @@ extern Relation heap_create(const char *relname,
 			bool mapped_relation,
 			bool allow_system_table_mods);
 
-extern Oid heap_create_with_catalog(const char *relname,
+extern Oid relation_create_with_catalog(const char *relname,
 						 Oid relnamespace,
 						 Oid reltablespace,
 						 Oid relid,
@@ -73,9 +73,9 @@ extern Oid heap_create_with_catalog(const char *relname,
 						 bool is_internal,
 						 ObjectAddress *typaddress);
 
-extern void heap_create_init_fork(Relation rel);
+extern void relation_create_init_fork(Relation rel);
 
-extern void heap_drop_with_catalog(Oid relid);
+extern void relation_drop_with_catalog(Oid relid);
 
 extern void heap_truncate(List *relids);
 

#2Thom Brown
thom@linux.com
In reply to: Anastasia Lubennikova (#1)
Re: Refactoring of heapam code.

On 5 August 2016 at 08:54, Anastasia Lubennikova
<a.lubennikova@postgrespro.ru> wrote:

[... full quote of the original proposal snipped; see message #1 above ...]

I also changed functions that do not access a tuple's data but only need
the HeapTupleHeader fields. They use the macro PageGetItemHeapHeaderOnly.

I do not insist on these particular names, by the way.

Patch 2.
heapam.c, hio.c and src/backend/catalog/heap.c have a lot of confusing names
and outdated comments. The patch is intended to improve this.
Fun fact: I found several "check it later" comments unchanged since the
"Postgres95 1.01 Distribution - Virgin Sources" commit.

I wonder if we can find a better name than relation_drop_with_catalog()
since, again, it's not about all kinds of relations. We use another
function to drop index relations.

I hope these changes will be useful for future development.
The suggested patches are mostly about renaming and rearranging functions
between files, so, if nobody has conceptual objections, all I need from
reviewers is an attentive look to find typos, grammar mistakes and
overlooked areas.

Could you add this to the commitfest?

Thom


#3Kevin Grittner
kgrittn@gmail.com
In reply to: Anastasia Lubennikova (#1)
Re: Refactoring of heapam code.

On Fri, Aug 5, 2016 at 2:54 AM, Anastasia Lubennikova
<a.lubennikova@postgrespro.ru> wrote:

They can be logically separated into three categories:
"primary storage" - r, S, t, v. They store data and visibility information.
The only implementation is heapam.c
"secondary index" - i. They store some data and pointers to primary storage.
Various existing AMs and managed by AM API.
"virtual relations" - c, f, m. They have no physical storage, only entries
in caches and catalogs.

Materialized views (relkind == "m") have physical storage, and may have indexes.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#4Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Kevin Grittner (#3)
Re: Refactoring of heapam code.

05.08.2016 16:30, Kevin Grittner:

On Fri, Aug 5, 2016 at 2:54 AM, Anastasia Lubennikova
<a.lubennikova@postgrespro.ru> wrote:

They can be logically separated into three categories:
"primary storage" - r, S, t, v. They store data and visibility information.
The only implementation is heapam.c
"secondary index" - i. They store some data and pointers to primary storage.
Various existing AMs and managed by AM API.
"virtual relations" - c, f, m. They have no physical storage, only entries
in caches and catalogs.

Materialized views (relkind == "m") have physical storage, and may have indexes.

Good point. I confused the letters for views (virtual relations) and
materialized views (primary storage).
Should be:

"primary storage" - r, S, t, m. They store data and visibility information.
The only implementation is heapam.c
"secondary index" - i. They store some data and pointers to primary storage.
Various existing AMs and managed by AM API.
"virtual relations" - c, f, v. They have no physical storage, only entries
in caches and catalogs.

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#5Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Anastasia Lubennikova (#1)
Re: Refactoring of heapam code.

Anastasia Lubennikova wrote:

Working on page compression and some other issues related to
access methods, I found out that the code related to heap
looks too complicated. Much more complicated, than it should be.
Since I anyway got into this area, I want to suggest a set of improvements.

Hm. I'm hacking on heapam.c pretty heavily ATM. Your first patch
causes no problem I think, or if it does it's probably pretty minor, so
that's okay. I'm unsure about the second one -- I think it's okay too,
because it mostly seems to be about moving stuff from heapam.c to hio.c
and shuffling some code around that I don't think I'm modifying.

Also I think that it's quite strange to have a function heap_create(), that
is called
from index_create(). It has absolutely nothing to do with heap access
method.

Agreed. But changing its name while keeping it in heapam.c does not
really improve things enough. I'd rather have it moved elsewhere that's
not specific to "heaps" (somewhere in access/common perhaps). However,
renaming the functions would break third-party code for no good reason.
I propose that we only rename things if we also break it for other
reasons (say because the API changes in a fundamental way).

One more thing that requires refactoring is "RELKIND_RELATION" name.
We have a type of relation called "relation"...

I don't see us fixing this bit, really. Historical baggage and all
that. For example, a lot of SQL queries use "WHERE relkind = 'r'". If
we change the name, we should probably also change the relkind constant,
and that would break the queries. If we change the name and do not
change the constant, then we have to have a comment "we call them
RELKIND_TABLE but the char is r because it was called RELATION
previously", which isn't any better.

So there is a couple of patches. They do not cover all mentioned problems,
but I'd like to get a feedback before continuing.

I agree that we could improve things in this general area, but I do not
endorse any particular changes you propose in these patches; I haven't
reviewed your patches.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#6Michael Paquier
michael.paquier@gmail.com
In reply to: Alvaro Herrera (#5)
Re: Refactoring of heapam code.

On Sat, Aug 6, 2016 at 2:56 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Anastasia Lubennikova wrote:

So there is a couple of patches. They do not cover all mentioned problems,
but I'd like to get a feedback before continuing.

I agree that we could improve things in this general area, but I do not
endorse any particular changes you propose in these patches; I haven't
reviewed your patches.

Well, I am interested in the topic. And just had a look at the patches proposed.

+ /*
+ * Split PageGetItem into set of different macros
+ * in order to make code more readable.
+ */
+#define PageGetItemHeap(page, itemId) \
+( \
+   AssertMacro(PageIsValid(page)), \
+   AssertMacro(ItemIdHasStorage(itemId)), \
+   (HeapTupleHeader)(((char *)(page)) + ItemIdGetOffset(itemId)) \
+)
Placing macros that are specific to heap and indexes into bufpage.h is
not such a good idea. I'd suggest having the heap ones in heapam.h, and
the index ones in a more generic place, such as src/access/common, as
already mentioned by Alvaro.
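
For reference, the index-side counterpart in the same patch presumably
mirrors the heap macro, along these lines (a reconstruction from the
patch description, not a verbatim excerpt):

#define PageGetItemIndex(page, itemId) \
( \
	AssertMacro(PageIsValid(page)), \
	AssertMacro(ItemIdHasStorage(itemId)), \
	(IndexTuple)(((char *)(page)) + ItemIdGetOffset(itemId)) \
)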

+ onpage_tup = PageGetItemHeapHeaderOnly(page, newitemid);
Just PageGetItemHeapHeader would be fine.

@@ -2324,7 +2087,6 @@ FreeBulkInsertState(BulkInsertState bistate)
pfree(bistate);
}

-
/*
* heap_insert - insert tuple into a heap
Those patches have some noise.

Patch 0002 is doing two things: reorganizing the order of the routines
in heapam.c and moving some routines from heapam.c to hio.c. Those two
could be separate patches.

If the final result is to be able to extend custom AMs for tables, we
should have a structure like src/access/common/index.c,
src/access/common/table.c and src/access/common/relation.c for
example, and have headers dedicated to them. But yes, there is
definitely a lot of room for refactoring as we are aiming, at least I
guess so, at having more various code paths for access methods.
--
Michael


#7Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Alvaro Herrera (#5)
Re: Refactoring of heapam code.

05.08.2016 20:56, Alvaro Herrera:

Anastasia Lubennikova wrote:

Working on page compression and some other issues related to
access methods, I found out that the code related to heap
looks too complicated. Much more complicated, than it should be.
Since I anyway got into this area, I want to suggest a set of improvements.

Hm. I'm hacking on heapam.c pretty heavily ATM. Your first patch
causes no problem I think, or if it does it's probably pretty minor, so
that's okay. I'm unsure about the second one -- I think it's okay too,
because it mostly seems to be about moving stuff from heapam.c to hio.c
and shuffling some code around that I don't think I'm modifying.

I'm sure these patches will not cause any problems. The second one is
mostly about moving unrelated stuff from heapam.c to hio.c and renaming
a couple of functions in heap.c.
Anyway, they are not a final solution, just a proof of concept.

By the way, I thought about looking at the mentioned patch and
probably reviewing it, but didn't find any of your patches on the
current commitfest. Could you point out the thread?

Also I think that it's quite strange to have a function heap_create(), that
is called
from index_create(). It has absolutely nothing to do with heap access
method.

Agreed. But changing its name while keeping it in heapam.c does not
really improve things enough. I'd rather have it moved elsewhere that's
not specific to "heaps" (somewhere in access/common perhaps). However,
renaming the functions would break third-party code for no good reason.
I propose that we only rename things if we also break it for other
reasons (say because the API changes in a fundamental way).

Yes, I agree that it should be moved to another file.
Just to be more accurate: it's not in heapam.c now, it is in
src/backend/catalog/heap.c, which requires many more changes
than I made.

Concerning the third-party code: it's a good observation and we should
definitely keep it in mind. Nevertheless, I doubt that there are a lot of
external calls to these functions. And I hope this thread will end up with
fundamental changes introducing a clear API for all the mentioned problems.

One more thing that requires refactoring is "RELKIND_RELATION" name.
We have a type of relation called "relation"...

I don't see us fixing this bit, really. Historical baggage and all
that. For example, a lot of SQL queries use "WHERE relkind = 'r'". If
we change the name, we should probably also change the relkind constant,
and that would break the queries. If we change the name and do not
change the constant, then we have to have a comment "we call them
RELKIND_TABLE but the char is r because it was called RELATION
previously", which isn't any better.

Good point.
I'd rather change it to RELKIND_BASIC_RELATION or something like that,
which is more specific and still goes well with the 'r' constant. But
minor changes like that can wait until we clarify the API in general.
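
For instance (just an illustration of the idea, not a posted patch):

#define RELKIND_BASIC_RELATION 'r'	/* ordinary table; 'r' kept for
									 * compatibility with existing queries */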

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#8Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Michael Paquier (#6)
Re: Refactoring of heapam code.

08.08.2016 03:51, Michael Paquier:

On Sat, Aug 6, 2016 at 2:56 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Anastasia Lubennikova wrote:

So there is a couple of patches. They do not cover all mentioned problems,
but I'd like to get a feedback before continuing.

I agree that we could improve things in this general area, but I do not
endorse any particular changes you propose in these patches; I haven't
reviewed your patches.

Well, I am interested in the topic. And just had a look at the patches proposed.

+ /*
+ * Split PageGetItem into set of different macros
+ * in order to make code more readable.
+ */
+#define PageGetItemHeap(page, itemId) \
+( \
+   AssertMacro(PageIsValid(page)), \
+   AssertMacro(ItemIdHasStorage(itemId)), \
+   (HeapTupleHeader)(((char *)(page)) + ItemIdGetOffset(itemId)) \
+)
Placing macros that are specific to heap and indexes into bufpage.h is
not such a good idea. I'd suggest having the heap ones in heapam.h, and
the index ones in a more generic place, such as src/access/common, as
already mentioned by Alvaro.

+ onpage_tup = PageGetItemHeapHeaderOnly(page, newitemid);
Just PageGetItemHeapHeader would be fine.

@@ -2324,7 +2087,6 @@ FreeBulkInsertState(BulkInsertState bistate)
pfree(bistate);
}

-
/*
* heap_insert - insert tuple into a heap
Those patches have some noise.

Patch 0002 is doing two things: reorganizing the order of the routines
in heapam.c and moving some routines from heapam.c to hio.c. Those two
could be separate patches.

If the final result is to be able to extend custom AMs for tables, we
should have a structure like src/access/common/index.c,
src/access/common/table.c and src/access/common/relation.c for
example, and have headers dedicated to them. But yes, there is
definitely a lot of room for refactoring as we are aiming, at least I
guess so, at having more various code paths for access methods.

Thank you for the review; I'll fix these problems in the final version.

By posting the first message I intended to raise a discussion. The patches
were attached mainly to illustrate the problem and to be more concrete.

Thank you for the attention and feedback.
Since there are no objections to the idea in general, I'll write an
exhaustive README about the current state of the code and then propose
an API design to discuss the details.

Stay tuned.

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#9Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Anastasia Lubennikova (#7)
Re: Refactoring of heapam code.

Anastasia Lubennikova wrote:

By the way, I thought about looking at the mentioned patch and
probably reviewing it, but didn't find any of your patches on the
current commitfest. Could you point out the thread?

Sorry, I haven't posted anything yet.

Agreed. But changing its name while keeping it in heapam.c does not
really improve things enough. I'd rather have it moved elsewhere that's
not specific to "heaps" (somewhere in access/common perhaps). However,
renaming the functions would break third-party code for no good reason.
I propose that we only rename things if we also break it for other
reasons (say because the API changes in a fundamental way).

Yes, I agree that it should be moved to another file.
Just to be more accurate: it's not in heapam.c now, it is in
src/backend/catalog/heap.c, which requires many more changes
than I made.

Argh. Clearly, the code organization in this area is not good at all.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#10Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Anastasia Lubennikova (#8)
Re: Refactoring of heapam code.

08.08.2016 12:43, Anastasia Lubennikova:

08.08.2016 03:51, Michael Paquier:

On Sat, Aug 6, 2016 at 2:56 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Anastasia Lubennikova wrote:

So there is a couple of patches. They do not cover all mentioned
problems,
but I'd like to get a feedback before continuing.

I agree that we could improve things in this general area, but I do not
endorse any particular changes you propose in these patches; I haven't
reviewed your patches.

Well, I am interested in the topic. And just had a look at the
patches proposed.

+ /*
+ * Split PageGetItem into set of different macros
+ * in order to make code more readable.
+ */
+#define PageGetItemHeap(page, itemId) \
+( \
+   AssertMacro(PageIsValid(page)), \
+   AssertMacro(ItemIdHasStorage(itemId)), \
+   (HeapTupleHeader)(((char *)(page)) + ItemIdGetOffset(itemId)) \
+)
Placing macros that are specific to heap and indexes into bufpage.h is
not such a good idea. I'd suggest having the heap ones in heapam.h, and
the index ones in a more generic place, such as src/access/common, as
already mentioned by Alvaro.

+ onpage_tup = PageGetItemHeapHeaderOnly(page, newitemid);
Just PageGetItemHeapHeader would be fine.

@@ -2324,7 +2087,6 @@ FreeBulkInsertState(BulkInsertState bistate)
pfree(bistate);
}

-
/*
* heap_insert - insert tuple into a heap
Those patches have some noise.

Patch 0002 is doing two things: reorganizing the order of the routines
in heapam.c and moving some routines from heapam.c to hio.c. Those two
could be separate patches.

If the final result is to be able to extend custom AMs for tables, we
should have a structure like src/access/common/index.c,
src/access/common/table.c and src/access/common/relation.c for
example, and have headers dedicated to them. But yes, there is
definitely a lot of room for refactoring as we are aiming, at least I
guess so, at having more various code paths for access methods.

Thank you for the review; I'll fix these problems in the final version.

By posting the first message I intended to raise a discussion. The patches
were attached mainly to illustrate the problem and to be more concrete.

Thank you for the attention and feedback.
Since there are no objections to the idea in general, I'll write an
exhaustive README about the current state of the code and then propose
an API design to discuss the details.

Stay tuned.

Here is the promised design draft.
https://wiki.postgresql.org/wiki/HeapamRefactoring

Looking forward to your comments.

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


#11Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Anastasia Lubennikova (#8)
Re: Refactoring of heapam code.

On Mon, Aug 8, 2016 at 3:13 PM, Anastasia Lubennikova <
a.lubennikova@postgrespro.ru> wrote:

Thank you for the review, I'll fix these problems in final version.

Posting the first message I intended to raise the discussion. Patches
were attached mainly to illustrate the problem and to be more concrete.

I started looking at the first patch posted above, but it seems you'll
rewrite these patches after discussion on the heapam refactoring ideas you
posted here https://wiki.postgresql.org/wiki/HeapamRefactoring concludes.

Some comments anyways on the first patch:

1. It does not apply cleanly with git-apply - many whitespace errors
2. I don't understand the difference between PageGetItemHeapHeaderOnly()
and PageGetItemHeap(). They seem to do exactly the same thing and can be
used interchangeably.
3. While I like the idea of using separate interfaces to get heap/index
tuple from a page, maybe they can internally use a common interface
instead of duplicating what PageGetItem() does already.
4. Should we rename PageGetItemHeap() to PageGetHeapTuple() or something
like that?
5. If we do this, we should probably have corresponding wrapper
functions/macros for the remaining callers of PageGetItem()

I also looked at the refactoring design doc. Looks like a fine approach to
me, but the API will need much more elaborate discussions. I am not sure if
the interfaces as presented are enough to support everything that even
heapam can do today. And much more thought will be required to ensure we
don't miss out on things that a new primary access method may need.

A few points regarding how the proposed API maps to heapam:

- How do we support parallel scans on heap?
- Does the primary AM need to support locking of rows?
- Would there be a separate API to interpret the PAM tuple itself (something
that htup_details.h does today)? It seems natural that every PAM will have
its own interpretation of the tuple structure, and we would like to hide
that inside the PAM implementation.
- There are many upper modules that need access to system attributes
(xmin, xmax for starters). How do you plan to handle that? Do you think
we can provide enough abstraction so that the callers don't need to
know the tuple structures of individual PAMs?

I don't know what to do with the CF entry itself. I could change the status
to "waiting on author" but then the design draft may not get enough
attention. So I'm leaving it in the current state for others to look at.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#12Anastasia Lubennikova
a.lubennikova@postgrespro.ru
In reply to: Pavan Deolasee (#11)
Re: Refactoring of heapam code.

06.09.2016 07:44, Pavan Deolasee:

On Mon, Aug 8, 2016 at 3:13 PM, Anastasia Lubennikova
<a.lubennikova@postgrespro.ru> wrote:

Thank you for the review, I'll fix these problems in final version.

Posting the first message I intended to raise the discussion. Patches
were attached mainly to illustrate the problem and to be more
concrete.

I started looking at the first patch posted above, but it seems you'll
rewrite these patches after discussion on the heapam refactoring ideas
you posted here https://wiki.postgresql.org/wiki/HeapamRefactoring
concludes.

Thank you for the review.

Some comments anyways on the first patch:

1. It does not apply cleanly with git-apply - many whitespace errors

I'll fix all merge issues and attach it to the following message.

2. I don't understand the difference
between PageGetItemHeapHeaderOnly() and PageGetItemHeap(). They seem
to do exactly the same thing and can be used interchangeably.

The only difference between these two macros is that
PageGetItemHeapHeaderOnly() doesn't touch any key fields of a tuple;
it only checks header fields (see HeapTupleHeaderData). I divided it into
two separate macros while I was working on page compression, because
I wanted to avoid unnecessary decompression calls. These names are
just a kind of marker to make reading and improving the code easier.
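
A rough sketch of how the two macros could then diverge on a compressed
page; PageDecompressItem and the uncompressed-header layout are purely
hypothetical, just to illustrate the intent:

/* full access: may need to decompress the stored tuple */
#define PageGetItemHeap(page, itemId) \
	((HeapTupleHeader) PageDecompressItem((page), (itemId)))

/* header-only access: assumes the header is stored uncompressed */
#define PageGetItemHeapHeaderOnly(page, itemId) \
	((HeapTupleHeader) (((char *) (page)) + ItemIdGetOffset(itemId)))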

3. While I like the idea of using separate interfaces to get heap/index
tuple from a page, maybe they can internally use a common interface
instead of duplicating what PageGetItem() does already.

I'm not sure I get it right. Do you suggest leaving PageGetItem() and
writing PageGetItemHeap() and PageGetItemIndex() as wrappers over it?
That sounds reasonable while we have a similar layout for both heap and
index pages.
In any case, it'll be easy to separate them when necessary.
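
Something like the following, if I understand the suggestion correctly
(just a sketch of the wrapper approach, with the names from patch 1):

#define PageGetItemHeap(page, itemId) \
	((HeapTupleHeader) PageGetItem((page), (itemId)))
#define PageGetItemIndex(page, itemId) \
	((IndexTuple) PageGetItem((page), (itemId)))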

4. Should we rename PageGetItemHeap() to PageGetHeapTuple() or
something like that?

I don't feel like doing that, because HeapTuple is a different structure.
What we do get from the page is a HeapTupleHeaderData structure
followed by the user's data.

5. If we do this, we should probably have corresponding wrapper
functions/macros for the remaining callers of PageGetItem()

There are some calls of PageGetItem() left in BRIN and SP-GiST indexes;
I can write simple wrappers for them.
I left them alone just because they were of no interest for my
compression prototype.

I also looked at the refactoring design doc. Looks like a fine
approach to me, but the API will need much more elaborate discussions.
I am not sure if the interfaces as presented are enough to support
everything that even heapam can do today.

What features of heapam do you think this API could fail to support?
Maybe I've just missed them.

My point here is that heapam is too complicated for many simpler use cases:
read-only and append-only tables do not need many MVCC-related tricks like
vacuum and complicated locking, and tables with fixed-length attributes can
use a more optimal page layout. And so on.

I suggest refactoring that will allow us to develop new heap-like
access methods.
For the first version, they must have a method to "heapify" a tuple, i.e.
turn the internal representation of the data into a regular HeapTuple,
for example by putting stubs into the MVCC fields. Of course this
approach has its disadvantages, such as performance issues.
It definitely won't be enough to write a column store or to implement
other great data structures. But it allows us not to depend on the
Executor's code.
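
As a sketch, such a method could be a per-AM callback along these lines
(all names here are hypothetical):

/*
 * Hypothetical per-AM callback: convert the AM's internal tuple format
 * into a regular HeapTuple. The implementation could stub the MVCC
 * fields, e.g. set xmin to FrozenTransactionId so the tuple always
 * looks visible to the existing visibility checks.
 */
typedef HeapTuple (*am_heapify_function) (Relation rel, void *internal_tuple);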

And much more thought will be required to ensure we don't miss out on
things that a new primary access method may need.

Do you have any particular ideas for new access methods?

A few points regarding how the proposed API maps to heapam:

- How do we support parallel scans on heap?

As far as I understand, a parallel heap scan requires one more API function,
heap_beginscan_parallel(). It uses the same API function, heap_getnext(),
but in addition to the ordinary seq scan logic it selects the page to read
in a synchronized manner.
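
For reference, a rough sketch of how a caller drives a parallel scan with
the current API (setup of the shared pscan state and error handling are
elided):

/* each worker attaches to the shared scan state and iterates as usual */
HeapScanDesc scan = heap_beginscan_parallel(rel, pscan);
HeapTuple	tup;

while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)
{
	/* process tup; the next page to read is chosen cooperatively */
}
heap_endscan(scan);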

- Does the primary AM need to support locking of rows?

That's a good question. I'd say it should be an option.

- Would there be a separate API to interpret the PAM tuple itself
(something that htup_details.h does today)? It seems natural that
every PAM will have its own interpretation of the tuple structure,
and we would like to hide that inside the PAM implementation.

As I already wrote, for the first prototype I'd use a function to "heapify"
the tuple and not care about executor issues at all. You can find a more
detailed discussion of this question in another thread [1].

- There are many upper modules that need access to system attributes
(xmin, xmax for starters). How do you plan to handle that? Do you think
we can provide enough abstraction so that the callers don't need to
know the tuple structures of individual PAMs?

To be honest, I hadn't thought about it.
Do you mean external modules or upper levels of abstraction in Postgres?
I think that backward-compatibility support will be pretty complicated.
Could you provide any examples?

I don't know what to do with the CF entry itself. I could change the
status to "waiting on author" but then the design draft may not get
enough attention. So I'm leaving it in the current state for others to
look at.

I'll try to update the patches as soon as possible. A little code cleanup
will be useful regardless of the final refactoring design.

[1]: http://postgresql.nabble.com/Pluggable-storage-td5916322.html

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

#13Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Anastasia Lubennikova (#12)
Re: Refactoring of heapam code.

On Tue, Sep 6, 2016 at 8:39 PM, Anastasia Lubennikova <
a.lubennikova@postgrespro.ru> wrote:

06.09.2016 07:44, Pavan Deolasee:

2. I don't understand the difference between PageGetItemHeapHeaderOnly()
and PageGetItemHeap(). They seem to do exactly the same thing and can be
used interchangeably.

The only difference between these two macros is that
PageGetItemHeapHeaderOnly() doesn't touch any key fields of a tuple;
it only checks header fields (see HeapTupleHeaderData). I divided it into
two separate macros while I was working on page compression, because
I wanted to avoid unnecessary decompression calls. These names are
just a kind of marker to make reading and improving the code easier.

Ok. I still don't get it, but that's probably because I haven't seen a real
use case of that. Right now, both macros look exactly the same.

3. While I like the idea of using separate interfaces to get heap/index
tuple from a page, maybe they can internally use a common interface
instead of duplicating what PageGetItem() does already.

I'm not sure I get it right. Do you suggest leaving PageGetItem() and
writing PageGetItemHeap() and PageGetItemIndex() as wrappers over it?
That sounds reasonable while we have a similar layout for both heap and
index pages.
In any case, it'll be easy to separate them when necessary.

Yes, that's what I was thinking.

4. Should we rename PageGetItemHeap() to PageGetHeapTuple() or something
like that?

I don't feel like doing that, because HeapTuple is a different structure.
What we do get from the page is a HeapTupleHeaderData structure
followed by the user's data.

Ok, makes sense.

I also looked at the refactoring design doc. Looks like a fine approach to
me, but the API will need much more elaborate discussions. I am not sure if
the interfaces as presented are enough to support everything that even
heapam can do today.

What features of heapam do you think this API could fail to support?
Maybe I've just missed them.

I was thinking about locking, bulk reading (like the page-mode API), etc.
While you've added an API for vacuuming, we would probably also need an API
for collecting dead tuples, pruning, etc. (not sure if that can be combined
with vacuum). Of course, these are probably very specific to the current
implementation of heap/MVCC, and not all storages will need that.

I suggest refactoring that will allow us to develop new heap-like
access methods.
For the first version, they must have a method to "heapify" a tuple, i.e.
turn the internal representation of the data into a regular HeapTuple,
for example by putting stubs into the MVCC fields. Of course this
approach has its disadvantages, such as performance issues.
It definitely won't be enough to write a column store or to implement
other great data structures. But it allows us not to depend on the
Executor's code.

Ok, understood.

- There are many upper modules that need access to system attributes
(xmin, xmax for starters). How do you plan to handle that? Do you think
we can provide enough abstraction so that the callers don't need to
know the tuple structures of individual PAMs?

To be honest, I hadn't thought about it.
Do you mean external modules or upper levels of abstraction in Postgres?

I meant upper levels of abstraction like the executor. For example, while
inserting a new tuple, the executor (the index AM's insert routine to be
precise) may need to wait for another transaction to finish. Today, it can
easily get that information by looking at the xmax of the conflicting
tuple. How would we handle that with abstraction since not every PAM will
have a notion of xmax?
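
Roughly, today's code path looks like this (simplified; infomask checks
and multixact handling are elided, and htup here is the conflicting
tuple):

/* e.g. in the unique-index check: find the updater's xid and wait for it */
TransactionId	xwait = HeapTupleHeaderGetRawXmax(htup->t_data);

if (TransactionIdIsInProgress(xwait))
	XactLockTableWait(xwait, heapRel, &htup->t_self, XLTW_InsertIndexUnique);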

Thanks,
Pavan

Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#14Michael Paquier
michael.paquier@gmail.com
In reply to: Pavan Deolasee (#13)
Re: Refactoring of heapam code.

On Mon, Sep 12, 2016 at 7:12 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I was thinking about locking, bulk reading (like the page-mode API), etc.
While you've added an API for vacuuming, we would probably also need an API
for collecting dead tuples, pruning, etc. (not sure if that can be combined
with vacuum). Of course, these are probably very specific to the current
implementation of heap/MVCC, and not all storages will need that.

Likely not, but it is likely that people would like to be able to get
that if some custom method needs it. I think a good first step here
would be to brainstorm a potential set of custom AMs for tables, pick
the smallest meaningful one from the set of ideas proposed, and define
a basic set of relation/storage/table AM routines that we can come up
with to support it. And then build a prototype using it. We have been
talking a lot about refactoring and reordering stuff, which would be
good in the long term to make things more generic with heap, but
getting an idea of "we want that" may prove helpful in designing the
minimum set of features that we'd like to have here.
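
Something loosely modeled on 9.6's IndexAmRoutine might be the eventual
shape; the struct below is entirely hypothetical, just to make the idea
concrete:

/* Hypothetical table-AM routine table, by analogy with IndexAmRoutine */
typedef struct TableAmRoutine
{
	NodeTag		type;

	void		(*tuple_insert) (Relation rel, void *tuple);
	void		(*tuple_delete) (Relation rel, ItemPointer tid);
	void	   *(*scan_begin) (Relation rel);
	void	   *(*scan_getnext) (void *scan);
	void		(*scan_end) (void *scan);
} TableAmRoutine;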

This would anyway likely require extending CREATE TABLE to be able to
pass a custom AM for a given relation and store it in pg_catalog, but
that's a detail in the whole picture...
--
Michael


#15Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Anastasia Lubennikova (#12)
Re: Refactoring of heapam code.

On Tue, Sep 6, 2016 at 8:39 PM, Anastasia Lubennikova <
a.lubennikova@postgrespro.ru> wrote:

06.09.2016 07:44, Pavan Deolasee:

I don't know what to do with the CF entry itself. I could change the
status to "waiting on author" but then the design draft may not get enough
attention. So I'm leaving it in the current state for others to look at.

I'll try to update the patches as soon as possible. A little code cleanup
will be useful regardless of the final refactoring design.

I'm marking the patch as "Returned with Feedback". I assume you'll submit
a fresh set of patches for the next CF anyway.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services